
Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Category Archives: vSphere

ZXTM Virtual Appliance Flies on vSphere

 

I wrote about my favourite software IP traffic manager, the Zeus ZXTM, in a previous post.

Zeus have just released a whitepaper benchmarking their ZXTM virtual appliance running on vSphere.

It's interesting to note that for plain HTTP traffic they measured a 25% performance increase over ESX 3.5u4 and were able to max out the 4Gb/s links configured to the VMs and host machine – indicating that there is a very low overhead in the vSphere 4 hypervisor layer for handling network traffic.

The ZXTM can rapidly serve items from its cache as well as handling load balancing/URL redirection/rewriting etc. – which in a production environment would mean offloading traffic from the web server itself, with the net result being fewer web servers and consolidated ZXTM VMs.

Interestingly, multi-vCPU configurations performed better than a native install of the OS, which would indicate the vSphere 4 hypervisor is more efficient at CPU scheduling than the native OS (x64 Ubuntu).

Whilst there is a higher CPU overhead for handling SSL transactions, as the CPU itself needs to decrypt and process the traffic, the improvements in multi-vCPU performance and the low network overhead mean that if you were building a large-scale web platform you could treat the ZXTM as a scale-out SSL offload engine – but do it on commodity, virtualized or physical x64 hardware rather than specialized ASIC-type hardware (Cisco ACE etc.). The end result is a more flexible architecture at a lower cost with no hardware-vendor lock-in; “it’s just software”.

The performance whitepaper is available from Zeus here; you can download an eval copy here and a colleague also has some interesting articles about ZXTM configurations on his blog here.

Zeus are a key part of my cloud reference architecture, and offer service-provider type licensing as well as full support for virtualization – including Hyper-V – which plays well for delivering flexibility across private, public or hybrid cloud solutions.


vmodl.fault.MethodNotFound error when adding ESXi host to vCenter

 

I have been gradually rebuilding my home lab and adding a new HP ML115 G5 server (which is capable of running the new FT feature) as I plan to build an ESX inside ESX cluster to run an FT implementation on a single box (info on how to do that here).

Once I had installed my virtual ESXi hosts I ran into a problem trying to add them to vCenter as hosts: I kept getting a vmodl.fault.MethodNotFound error and an error about SSL certificates.

I tried several reinstalls, re-creating the VM and even a clean install of vCenter, all to no avail. Following some Twitter suggestions I downloaded a newer build of ESXi (build 171294) – and it worked first time. The build I had been using was the one I downloaded on GA day (build 140815), so the moral of the story is that it’s always worth checking the website for updated builds.

When you do this, it’s also worth updating to the latest vSphere client; I found some oddities in the UI that resulted in a red cross while trying to enable a VMkernel port to act as the FT logging interface.

I also had some problems enabling the VUM plug-in on that machine, so hopefully a client upgrade will fix that too.

It looks like all of the products (ESX classic, ESXi, vCenter) have significantly updated builds released since GA.
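If you want to double-check which build a host is actually running, the vSphere client shows it under Help > About, or you can ask the host directly from the console – a quick sketch (the output line is illustrative):

    # ESX service console (or ESXi's unsupported tech support mode) - prints version and build
    vmware -v
    VMware ESX 4.0.0 build-171294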

Screenshot showing 3 x physical hosts and 2 x virtual ESXi hosts in a cluster – all managed by a single vCenter instance


How to Deploy a Windows 2008 Server From a Template with vSphere

 

With ESX 3.5 and VirtualCenter 2.5 you needed to copy a bunch of sysprep files to the server to use the excellent template deployment functionality (step-by-step account here).

Now that vSphere supports all the newer Windows versions, I had to update my Windows 2008 templates.


There has been some confusion over how you deploy Windows 2008/Vista from a template in vSphere/vCenter 4.0 and have it sysprep’d ready for use. The good news is that you don’t need to do anything special: you don’t need to put sysprep files in a particular directory on the vCenter box, because for Windows 2008 & Vista there is no longer a separate sysprep download – it’s built into the default Windows OS installation.
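(One caveat, from memory: older guests like Windows 2003/XP still need the sysprep files copied onto the vCenter server as before – on a Windows 2003-based vCenter box the path is something like the below, but check the documentation for your build.)

    C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\sysprep\svr2003\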


Just use the customization specification manager and it can even set the IP address of your new virtual machine as part of the template deployment.


Under the hood it injects a sysprep unattend/answer file into the OS as it boots and does all the customisations for you, based on the specification you created or imported from vCenter 2.5.


So all you need to do is build your master VM with the OS installed, patched and VMware Tools installed; then you can shut it down, convert it to a template and just use the deploy-from-template wizard going forward.


vSphere ESXi as a VM – VMkernel Traffic Not Working

 

In the lab I am currently working with I have a set of vSphere 4 ESXi installations running as virtual machines and configured in an HA cluster – this is a great setup for testing VM patches and general ops procedures, or for learning about VMware HA/DRS/FT etc. (this lab is running on a pair of ML115 G5 servers but would work equally well on just one).


Everything installed OK and I can ping the virtual ESX servers from the vCenter host that manages the cluster (the warning triangle is because there is no management network redundancy – I can live with that in this lab).

All the ESX hosts (physical and virtual) are connected via iSCSI to a machine running OpenFiler, and the storage networking works OK. However, when I configured the vMotion & FT private networks between the virtual ESX hosts I could not ping the vMotion/FT IP addresses using vmkping, indicating some communication problem. Normally this would be a VLAN or routing issue, but in this instance all the NICs and IP addresses in my lab reside on a flat 10.0.0.0/8 network (it’s not production, just a lab).
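For reference, this is roughly how I test VMkernel connectivity – vmkping sends the ping from the VMkernel TCP/IP stack rather than the management interface, so it exercises the actual vMotion/FT path (the address below is a hypothetical vMotion/FT address on the other host):

    # list the VMkernel NICs and their IP addresses on this host
    esxcfg-vmknic -l

    # ping the other host's vMotion/FT address from the VMkernel stack
    vmkping 10.0.0.22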


After some digging I came across this post on running full ESX as a VM, and noted the section on setting the vSwitch to promiscuous mode – so I tried that on the vSwitch of the physical ESX host that the two ESXi VMs were running on:


And now the two virtual ESXi nodes can communicate via vmkping.


Problem solved – I can now vMotion nested VMs between the virtual ESX hosts. Very clever!

vSphere – How to Enable FT for a Nested VM

 

As in my previous post, I am working on a lab with virtual ESX 4 servers in it – I can vMotion VMs from a physical vSphere cluster into the virtual vSphere cluster perfectly, and performance is very good (just one dropped ping in my testing).

One of the physical hosts belongs to www.techhead.co.uk, which he has kindly lent for this joint experiment – see his posts here, here and here on running vSphere on these HP ML115 G5 servers and their FT compatibility. We have some joint postings in the pipeline on guest performance with complicated apps like SQL & Exchange when protected via FT, so keep your eyes peeled.

As the physical ESX hosts themselves are FT compatible, I thought I’d see if I could enable FT for a VM running inside a virtual ESX server cluster – so a VM running inside a hypervisor, inside another hypervisor!


Out of the box, unfortunately not; it gives the following error message 😦

Power On virtual machine

Record/Replay is not supported on this CPU for this guest operating system. You may have an incompatible CPU, you may have specified the wrong guest operating system type, or you may have conflicting options set in your config file. See the online help for a list of supported guest operating systems, CPUs and associated config options. Unable to enter fault tolerance mode.

To work around this you can set the following advanced (and likely totally unsupported) option to enable FT on the nested VM – the default is/was FALSE (thanks to the comment on this post for the replay.allowBTOnly = TRUE setting!).
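In case the screenshots don’t make it, the change boils down to one advanced configuration parameter on the nested VM – set via Edit Settings > Options > Advanced > Configuration Parameters in the client, or directly in the .vmx while the VM is powered off:

    # unsupported - lab use only; the default is/was FALSE
    replay.allowBTOnly = "TRUE"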


And here it is – Nested VM running, with FT enabled


Very nice

Later on you can see some warnings about hosts getting a bit behind. I also had some initial problems getting FT to bring up the 2nd VM properly – the UI said it was restarting and it got stuck there. I dropped the virtual ESXi hosts down to a single vCPU rather than two and it worked OK from then on; I decided to do this as the virtual ESXi nodes were coming up reporting 2 x quad-core CPUs whilst the physical host only has 1 x quad-core CPU, so I guess that was causing some confusion.

At this point both of my virtual ESXi hosts were on the same physical vSphere server, and I seemed to have problems with the secondary getting behind (the vLockstep interval).

In this instance my nested VM is running an x86 Windows 2003 unattended setup.


I vMotioned one of the virtual ESXi hosts to the second physical vSphere server (very cool in itself) and it seemed to be better for a while; I assume there had been some CPU contention from the nested VM.


However, in the end it flagged up similar errors – I assume due to the overhead of running a VM inside a hypervisor, inside another hypervisor 🙂 This is a lab setup, but it will prove very useful if you need to learn about this stuff or experiment with different configurations.

This is probably totally unsupported, use at your own risk – but it does work well enough to play about with in the lab.

Cheap vSphere Server

 

If you have a home lab setup or want to get going with learning VMware’s new vSphere product you will need an x64-capable machine to run it on, although it also runs under VMware Workstation – even supporting nested VMs and physical-ESX-to-virtual-ESX vMotion! Unfortunately it won’t run on my trusty old HP D530 desktops, which I’ve used to run ESX 3.5 over the last year or so.

My lab setup uses a couple of HP ML110 servers; they are low-cost and pretty capable boxes – for example, they both have 8GB of RAM and cost me less than £350 each with RAM and disks (although I’ve added storage from my spares pile).

If you are in the UK, Servers Plus have some great deals on HP ML-series servers, which are great for home lab setups – see some of Techhead’s postings on his findings with the ML1xx range here.

Link to the Servers Plus £199+VAT servers here (www.serversplus.com); if you tell them vinf.net or techhead.co.uk sent you they may cut you a deal on delivery, as they have done in the past (no promises, as I’ve not had a chance to speak to them).


A note of caution: if you are looking to try out the cool FT features of vSphere you will need to purchase specific CPUs, which may be more expensive – there is a good list of compatible CPUs on Eric’s blog here and some more reading here.

Check before you buy: you can look up the manufacturer’s part code to check which CPU each model has – or check with the supplier.


The dual-core Xeon CPU I have in my ML110 G5 is not compatible with FT 😦 but it does look like the AMD quad-cores may be – check first though; don’t take my word for it, I HAVE NOT TRIED IT (but I would like to, if someone wants to donate one 🙂).

UPDATE: the ML110 G5 with the AMD quad-core CPU IS VMware FT compatible – see the link here for more details; I am ordering one now!

If you are interested, here are some performance charts from my home lab running vSphere RC on an HP ML110 with 8GB RAM and 2 x 160GB SATA HDDs whilst doing various load tests of Exchange 2007 and Windows 2008 with up to 500 concurrent heavy-profile users. These stats are not particularly scientific, but they give you an idea of what these boxes can do – I’ve been more than happy with mine and I would recommend you get some for your lab.

[Screenshots: Exchange load test 1 – CPU, network and disk]

These are some general screengrabs; note there are lots of warnings showing – this is what happens when you thin-provision all your VMs and then one fills up rapidly, making the VMFS volume itself run out of space. You have been warned!


I’m running 15 VMs on one ML110; the 2nd box only has 1 VM on it, as I wanted to see how far I could push one box – I’ve not found a real limit yet! It runs a mix of Windows 2003/2008 virtual machines and doesn’t generally break a sweat. Note the provisioned vs. used space columns – thin provisioning 🙂 – and I’m also over-subscribing the RAM significantly.
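Incidentally, if you want to create a thin-provisioned disk by hand rather than through the client, vmkfstools will do it from the console – a minimal sketch with a hypothetical datastore path:

    # create a 20GB virtual disk that only allocates blocks as they are written
    vmkfstools -c 20G -d thin /vmfs/volumes/datastore1/testvm/testvm.vmdk

Just keep an eye on the datastore free space, per the warning above.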


vSphere Downloads

 

Now it’s out, these screengrabs show you the downloadable binaries.

Still has the classic (with service console) and ESXi versions


vCenter and all its utilities


vCenter Heartbeat, Data Recovery and vShield Zones – interesting that they have packaged Data Recovery and vShield Zones into one (large!) download.


No Cisco Nexus 1000V on the VMware.com site, but you can register for a free 60-day eval here.

vSphere, Come and Get it

 

Well today is the day, GA or general availability of VMware’s new flagship product – vSphere (formerly known as Virtual Infrastructure 3).

The VMware.com homepage is showing the download link for a free 60-day trial.

The download page is here.


For some of my vSphere articles, and many more, check out these links:

http://www.yellow-bricks.com/2009/04/21/vsphere-linkage/

https://vinf.net/2009/02/26/hands-on-lab-01-vsphere-features-overview/

https://vinf.net/2009/05/08/vsphere-rc-ram-under-vmware-workstation-how-low-can-you-go/

http://en.wordpress.com/tag/vsphere/

vSphere RC RAM Under VMware Workstation: How Low Can You Go?

 

Getting ESX (in its various versions) to run under VMware Workstation has proven to be a very popular topic on this blog; if you are a consultant who has to do product demos of VI3/vSphere, or are studying for your VCP, it’s a very useful thing to be able to do on your own laptop rather than relying on remote connections or lugging around demo kit.

Good news: the RC build of vSphere will boot under the latest VMware Workstation build (6.5.2) without any of the .vmx hackery you had to do with previous versions, and it seems quite fast to boot.

Bad news: the RC build of vSphere needs at least 2GB of RAM to boot; this is a problem on a laptop with 4GB of RAM, as it means you can only really run one at a time.

Luckily, Duncan Epping (or VCDX 007 – licensed to design :)) has discovered how you can hack the startup script to allow it to run with less than 2GB of RAM – details here. This isn’t officially supported, but it does work.

In the interests of science I experimented with VMs with decreasing amounts of RAM to see what the bare minimum is that you can get away with for a VM’d version of vSphere RC.

The magic number seems to be 768MB of RAM; if you allocate less than this to the VM, it results in a Purple Screen of Death (PSOD) at boot time.
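In Workstation terms that just means dialling the VM’s memory down in its settings or .vmx file – a sketch; note this only boots once Duncan’s startup-script hack below is in place:

    memsize = "768"    # RAM for the nested vSphere RC VM - any lower and it PSODs at boot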


Note: this may change for the GA/RTM final version, but these are my findings for RC.

The relevant section of my /etc/vmware/init/init.d/00.vmnix file looks like the following (note it won’t actually boot with 512MB assigned to the VM).
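As the original screenshot may not survive, here’s a rough illustration of the edit from memory – the exact variable name in 00.vmnix may differ by build, so treat this as a sketch rather than gospel:

    # 00.vmnix enforces a minimum RAM size before the VMkernel will load;
    # lowering the value (it's in KB) is what lets the VM boot with 768MB
    RequiredMemory=786432    # stock value was around the 2GB mark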


Some screen captures of the vSphere RC boot process are below.


And finally the boot screen once it’s finished – it takes 2-3 minutes with 768MB of RAM on my laptop to get to this boot screen.


I am doing this on a Dell D620 with 4GB RAM and Intel VT enabled in the BIOS, running Vista x86 and VMware Workstation v6.5.2 build 156735.


I haven’t tried, but I assume I can’t power on VMs under these Workstation-hosted instances of vSphere; I can, however, connect them to a vCenter 4 machine and practice with all the management and configuration tools.

Happy tweaking…

Importing vCenter 2.5 Customization Specifications into vCenter 4

 

If you have a lot of customization specifications set up in your vCenter server you are likely to want to copy these to your vSphere/vCenter 4 lab or production system when it’s released; otherwise it’s a bit tedious typing it all in again 🙂

The following steps show how to export and then import your guest customization specifications into vCenter 4. You have to do them one by one, as there is no multi-select available.

First, export the settings from the vCenter 2.5 server using the VI client connected to the vCenter 2.5 server (not the ESX host)


Then save each one out as an .XML file


Then connect your VI client to your vCenter 4 server (not the vSphere host itself) and go to the home view and click on the Customization Specifications Manager icon


Then click Import and choose the .xml file you exported previously


Click OK and it will import the specification.

If you have encrypted passwords stored in your customization specification then you will be prompted to re-enter them (unless you used a real certificate or PKI across both hosts).


It will then run you through the guest customization wizard to re-enter the password – but don’t worry, all the other settings are retained; you only need to re-type the password.


Once you’ve been through this process the customization specification is now available for use when you deploy from a template within vCenter 4.

Other than that, the overall template process is similar to the VI3 process that I wrote about a while ago here.