My ramblings on the stuff that holds it all together
DC02 – Best Practices for Lab Manager (VMworld Europe 2009)
This was an interesting session; I’ve played a bit with Lab Manager but definitely intend to invest more time in it this year. Key things for me were:
There are approximately 1,000 deployments of Lab Manager among customers, a large percentage of them in Europe.
You need to bear in mind VMFS constraints on the number of allowed hosts when using DRS with Lab Manager. LM typically provisions and de-provisions lots of VMs, so size hosts and clusters accordingly, and consider storage bandwidth, disk groups, etc. The self-service element could easily let this get out of control with over-zealous users; implement storage leases to avoid this (use it or lose it!)
Real-life Lab Manager implementations have typically been for the following uses:
- Training – I hadn’t personally considered this use-case before, but it’s popular
- Demo environments – McAfee uses LM to run their online product demo environments, with some custom code to expose the VM console outside of VI into a browser.
- Development – VMware makes heavy use of Lab Manager for its own dev environments. They have built end-to-end automation via the SOAP API to integrate with smoke-test tools and commercial tools like Mercury. Builds go through automated smoke tests, with the whole environment being captured with the bug in-situ and notifications and links sent to the relevant teams for investigation – excellent stuff; it would be good to see a more detailed case study on how this has been built.
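To make the automation idea above concrete, here’s a minimal Python sketch of such a build/smoke-test loop. The operation names (`ConfigurationDeploy`, `ConfigurationCapture`, `ConfigurationUndeploy`) are from the Lab Manager SOAP API; everything else (the `lm` client object, the `run_smoke_tests` and `notify` callables, the exact parameter names and fence-mode constant) is a stand-in for your own tooling, not VMware’s actual implementation.

```python
# Hedged sketch of build automation against the Lab Manager SOAP API.
# Only the three Configuration* operation names come from the LM API docs;
# the surrounding plumbing is hypothetical.

def smoke_test_build(lm, config_id, run_smoke_tests, notify, fence_mode):
    """Deploy a library configuration, smoke-test it, and on failure
    capture the whole environment with the bug in situ."""
    # fence_mode is a constant from the LM SOAP API docs (version-specific,
    # so it is passed in rather than hard-coded here).
    lm.ConfigurationDeploy(configurationId=config_id, isCached=False,
                           fenceMode=fence_mode)
    passed = run_smoke_tests(config_id)
    if passed:
        # Clean pass: tear the environment down again.
        lm.ConfigurationUndeploy(configurationId=config_id)
        return True
    # Failure: capture the deployed environment back to the library so the
    # relevant team can redeploy it later with the bug still in place.
    snap_id = lm.ConfigurationCapture(configurationId=config_id,
                                      newLibraryName="bug-%s" % config_id)
    notify("Smoke tests failed; environment captured as config %s" % snap_id)
    return False
```

The interesting design point is the failure branch: instead of collecting logs, the whole multi-VM environment is snapshotted, so investigation starts from the exact broken state.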
Multi-site Lab Manager implementations are tricky and need manual template copies or localised installations of LM; this may be addressed in future releases.
When backing up Lab Manager hosted VMs, think about what you are backing up; guest-based backup tools (Symantec/NTBackup, etc.) will expand out the data from each VM and consume extra storage – Lab Manager uses linked clones, so the actual storage used on the VMFS is pretty efficient.
Ideally use SAN-based snapshots of the whole VMFS (or disk tree) rather than individual VMDK backups. There is no file/VM granularity, but there is a good reason for this: because linked clones are so inter-dependent, you need to back up the whole chain together, otherwise you risk consistency issues (the maximum number of linked clones is 30).
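The chain dependency is easier to see with a toy model. This is not the VMDK format – just a sketch of the general copy-on-write idea behind linked clones: each clone stores only the blocks it has written, and a read falls through the parent chain to the base disk, which is why a child backed up without its ancestors is incomplete.

```python
# Toy model of linked-clone delta disks (illustrative only, not VMDK).

class DeltaDisk:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}          # block number -> data written at this level

    def write(self, n, data):
        self.blocks[n] = data     # copy-on-write: only this level changes

    def read(self, n):
        disk = self
        while disk is not None:   # walk up the chain to find the block
            if n in disk.blocks:
                return disk.blocks[n]
            disk = disk.parent
        return None               # block never written anywhere in the chain

base = DeltaDisk()
base.write(0, "os-image")
clone = DeltaDisk(parent=base)
clone.write(1, "app-data")

# clone.read(0) only works because the base disk is still there: the
# clone itself holds just block 1, so backing up the clone's delta file
# alone would lose every block it never overwrote.
```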
VMware says there is no real performance penalty for using linked clones; SAN storage processors can cache the linked/differential parts of the VMDK files very efficiently (due to their smaller size fitting in cache, I guess?)
There is a tool called SSMove which can move virtual disk trees (linked-clone base disk + all children) between VMFS volumes – it is not Storage vMotion aware, so it needs downtime for that VM (and its children) to carry out.
There is a concept of organizations within Lab Manager, which allows you to separate access between multiple teams using the same Lab Manager server and infrastructure.
Network Fencing is a useful feature in Lab Manager; it means you can have multiple environments running with identical or conflicting IP address spaces. It automatically deploys a virtual appliance which functions as a NAT and router between the environments, keeping traffic separate while still allowing end-user access by automatically NAT’ing inbound connections to the appropriate environment/container.
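A quick conceptual sketch of what that fencing appliance is doing (this is not Lab Manager code – the class, the external address pool, and all the addresses are made up for illustration): two configurations reuse the same internal address space, and the router hands each published VM a unique external address, then uses its NAT table to route inbound connections to the right environment.

```python
# Conceptual model of a fencing NAT router, not VMware's implementation.

class FenceRouter:
    def __init__(self):
        self.nat = {}        # external IP -> (config name, internal IP)
        self.next_ext = 100  # next host number in an assumed external pool

    def publish(self, config, internal_ip):
        """Allocate a unique external IP for a VM inside a fenced config."""
        ext = "192.168.50.%d" % self.next_ext   # hypothetical external pool
        self.next_ext += 1
        self.nat[ext] = (config, internal_ip)
        return ext

    def route_inbound(self, external_ip):
        """Map an inbound connection to its environment and internal IP."""
        return self.nat[external_ip]

router = FenceRouter()
# Both fenced copies use 10.0.0.5 internally, yet don't clash externally:
dev_ext = router.publish("dev-copy", "10.0.0.5")
test_ext = router.publish("test-copy", "10.0.0.5")
```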
All in all, there are some good features being added to Lab Manager, but it would be really good to see VMware working with PlateSpin to integrate the two products more tightly. Out of the box, Lab Manager doesn’t have a facility to import physical machines via P2V. VMware is focused on end-to-end VM lifecycle solutions, but PlateSpin could bring a lot to the table by keeping lab copies of physical servers refreshed – and conversely, by syncing workload (OS/app/data) changes from development systems back out to physical machines (or other hypervisors – more on PlateSpin and its X2X facilities in a previous post here).