Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together

Distributed Power Management (DPM) for your Home Lab


I am in the middle of rebuilding and expanding my vTARDIS home lab environment (look out for an update soon), but as I add more physical vSphere hosts I’ve been looking at ways to reduce the overall power consumption, as my lab’s idle draw has now overtaken that of the rest of my house (measured using one of these – get one, they are great, and Google PowerMeter integration is coming soon for online monitoring).

Distributed Power Management (DPM) was first introduced in experimental form in ESX 3.5 and became fully supported in vSphere 4.0. It’s an interesting technology that consolidates the workloads in a cluster onto as few physical hosts as possible using vMotion/DRS and puts the idle hosts into standby, reducing overall power consumption. When demand increases, DPM can automatically resume the standby hosts and DRS redistributes the virtual machines across the cluster – essentially making the physical host layer somewhat elastic.
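To make the idea concrete, here is a toy sketch of that consolidation decision. This is my own illustration, not VMware’s actual DPM algorithm (which weighs CPU/memory demand history and many other factors); the function names and the 20% headroom figure are assumptions purely for the example.

```python
import math

def hosts_needed(vm_demands, host_capacity, headroom=0.2):
    """Hosts required to run the given VM demands (e.g. GHz of CPU),
    keeping a `headroom` fraction of each host's capacity free."""
    if not vm_demands:
        return 0
    usable = host_capacity * (1 - headroom)
    return max(1, math.ceil(sum(vm_demands) / usable))

def standby_candidates(num_hosts, vm_demands, host_capacity):
    """Idle hosts a DPM-like policy could place into standby."""
    return max(0, num_hosts - hosts_needed(vm_demands, host_capacity))

# 3-node cluster, 4 light VMs on 8 GHz hosts: one host is enough,
# so two could stand by; doubling the VM count wakes one back up.
print(standby_candidates(3, [1.0] * 4, 8.0))
print(standby_candidates(3, [1.0] * 8, 8.0))
```

The point is simply that the number of powered-on hosts tracks aggregate demand rather than staying fixed, which is what makes the host layer feel elastic.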


Whilst production use-cases may be more limited – most DC managers hate varying power loads in the datacentre, as they are much harder to plan for – I have definitely found a use for it in my lab.

Out of the box on the ML115 G5 (I have only tested the AMD quad-core versions) it “just works” using the onboard BMC and doesn’t seem to require the expensive iLO add-on. I assume it’s using Wake on LAN (WoL) magic packets to wake up the hosts – in my testing it works fine and reliably suspends/resumes hosts as demand changes (your mileage may vary).
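For the curious, a WoL magic packet is just six 0xFF bytes followed by the target NIC’s MAC address repeated 16 times, usually sent as a UDP broadcast. Here’s a minimal sketch for waking a lab host yourself (the MAC address shown is a placeholder; port 9 is the conventional choice, though port 7 is also common):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Send the magic packet as a UDP broadcast on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# e.g. wake("00:1a:2b:3c:4d:5e") from a machine on the same subnet
```

Note that because the packet is a subnet broadcast, you’d normally run something like this from a machine inside the lab network (or over VPN), not across the internet.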

The screenshot below shows a 3-node cluster with 4 running virtual machines (which are actually virtual ESXi hosts, but the principle also applies to normal VMs running on a cluster); note that one host is in standby because the workload is “light”.


If I power on another 4 virtual ESXi hosts, the cluster realises it needs more resources and asks the node in standby mode to start up.




In my environment it takes approximately 3–5 minutes for a host to power back on and be admitted back into the cluster.


Then DRS will kick in and do its thing, balancing the VMs across the newly (dynamically) expanded cluster.
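The rebalancing step can be pictured as a simple greedy loop: while the most-loaded host is carrying noticeably more than the least-loaded one, migrate a VM across. This is only a toy sketch of the idea – real DRS does a full cost/benefit analysis over many metrics, and the function names here are my own:

```python
def rebalance(hosts):
    """hosts: {host_name: [vm_load, ...]}. Greedily move the smallest
    VM from the most-loaded host to the least-loaded one, as long as
    doing so actually shrinks the load gap. Returns the move list."""
    moves = []
    def load(h):
        return sum(hosts[h])
    while True:
        hot = max(hosts, key=load)
        cold = min(hosts, key=load)
        if not hosts[hot]:
            break
        vm = min(hosts[hot])
        # If the gap is no bigger than the VM we'd move, moving it
        # wouldn't improve balance - stop.
        if load(hot) - load(cold) <= vm:
            break
        hosts[hot].remove(vm)
        hosts[cold].append(vm)
        moves.append((vm, hot, cold))
    return moves
```

For example, a freshly resumed, empty host paired with a busy one ends up with half the VMs after a couple of simulated vMotions.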


If I power down those VMs again (taking the total cluster load to zero VMs), within 5 minutes it puts 2 of the hosts into standby mode again (thus saving the power consumption of 2 hosts).



Even if you don’t want to turn on the automation settings, you can use this feature to remotely power parts of your home lab on and off (assuming you have VPN access and more than one host). What impressed me more than anything is that this just worked out of the box with the ML115 G5.


If you want more tips on power-saving with the ML115 range, it’s worth checking out this post on Techhead to see what you can do with the more advanced CPU settings on a per-host basis.

2 responses to “Distributed Power Management (DPM) for your Home Lab”

  1. Simon Gallagher January 22, 2011 at 7:44 pm

    As a footnote to this, I have since discovered that if a host has a service console/vmkernel NIC attached to a dvSwitch, it cannot enter standby and DPM via WoL doesn’t work

  2. Pingback: Welcome to vSphere-land! » Home Lab Links
