Virtualization, Cloud, Infrastructure and all that stuff in-between

My ramblings on the stuff that holds it all together


Resources for HP c-class blade and EVA Design for vSphere 4


I am currently working on a design for a vSphere 4 platform on HP’s EVA SAN and c-class blade chassis. To provide flexible network connectivity we are leveraging the new Flex-10 Virtual Connect modules, as well as VC Fibre Channel modules, to simplify administration.

Because finding things on the HP site can sometimes be a bit hit & miss, this post serves as a bookmark to the more useful resources I found.

Hardware Configurator – Generate Bill of Materials (BoM)

HP eConfigurator – online tool to configure and cost blade and chassis options and produce a validated bill of materials. Be sure to select your country to ensure you get the correct power options and list prices.



Virtual Connect

HP Virtual Connect cookbook – updated for Flex 10 (Feb 2010)

Virtual Connect Webinar series

How does a Virtual Connect FC Module work? (warning – old and outdated with current firmware)


Flex10 Links

Virtualised Reality (Barry Coombs)


EVA Storage

HP EVA User Guide

Best Practices for HP StorageWorks EVA with vSphere [Whitepaper]

Best Practices for HP StorageWorks Enterprise Virtual Array with VMware vSphere 4 [WEBINAR]


vSphere Installation

HP-Specific ESXi Installable Download (HP Passport Account Required)



HP Power Calculator Spreadsheets (BL, DL, PL, EVA) in .xls format (Office 2010 users need to “Enable Editing” to take the file out of Protected View in order for the links to work).


HP Blade Power Sizing Utility (can be a bit buggy and slow – but works) – and can export in a number of different formats including Word (Example Doc)



Firmware Maintenance CD Download

Links Updated & Section reorganised 23rd Feb 2011

HP Virtual Connect Technical Webinar Series


HP are running a free series of technical webinars around their Virtual Connect technology. If, like me, you are trying to get your head around VC technology, this is for you.

Visit the following URL and sign up; there is a range of free local dial-in numbers for the audio. Note: I couldn’t get it to work in Firefox, so you may need to use IE like I did.

It seems there was a timezone/daylight-saving mix-up between the US and Europe for the first session in the series, which is being repeated now.

The sessions are being recorded and will be available online to replay at

A quick peek at the Flex-10 session is shown below. I’ve not seen a marketing/RoI slide yet, so it looks good to me 🙂


How does an HP Fibre Channel Virtual Connect Module Work?


Techhead and I have spent a lot of time recently scratching our heads over how and where fibre channel SAN connections go in a c7000 blade chassis.

If you don’t know, a FC-VC module looks like this, and you install them in redundant pairs in adjacent interconnect bays at the rear of the chassis.


You then patch each of the FC Ports into a FC switch.

The supported configuration is one FC-VC Module to 1 FC switch (below)


Connecting one VC module to more than one FC switch is unsupported (below)


So, in essence, you treat one FC-VC module as terminating all HBA port 1s and the other FC-VC module as terminating all HBA port 2s.

The setup we had:

  • A number of BL460c blades with dual-port Qlogic Mezzanine card HBAs.
  • HP c7000 Blade chassis with 2 x FC-VC modules plugged into interconnect bay 3 & 4 (shown below)


The important point to note is that although you have 4 uplinks on each FC-VC module, that does not mean you have a 2 x 16Gb/s connection “pool” or trunk that you just connect into.

Put differently, if you unplug one uplink the overall bandwidth does not simply drop to 12Gb/s; instead, it disconnects a single HBA port on a number of servers and forces them to fail over to the other path and FC-VC module.

It does not do any dynamic load balancing or anything like that – it is literally a physical port concentrator, which is why it needs NPIV to pass through the WWNs from the physical blade HBAs.

There is a concept of oversubscription; in the Virtual Connect GUI this is managed by setting the number of uplink ports used.

Most people will probably choose 4 uplink ports per VC module, which is 4:1 oversubscription, meaning each FC-VC port (and there are 4 per module) has 4 individual HBA ports connected to it. If you reduce the number of uplinks you increase the oversubscription (2 uplinks = 8:1 oversubscription, 1 uplink = 16:1 oversubscription).
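As a quick sanity check on those ratios, the arithmetic can be sketched in a few lines of Python. The helper name and the assumption of 16 half-height blade bays per chassis are mine, not HP’s:

```python
# Sketch of FC-VC oversubscription arithmetic.
# Assumes a fully populated c7000: 16 half-height blade bays, so 16 HBA
# ports land on each FC-VC module, spread across the configured uplinks.

HBA_PORTS_PER_MODULE = 16  # one HBA port per blade bay, per module

def oversubscription(uplinks: int) -> int:
    """Return the N in N:1 oversubscription for a given uplink count."""
    if uplinks not in (1, 2, 4):
        raise ValueError("expected 1, 2 or 4 uplinks per module")
    return HBA_PORTS_PER_MODULE // uplinks

print(oversubscription(4))  # 4  (i.e. 4:1)
print(oversubscription(2))  # 8  (i.e. 8:1)
print(oversubscription(1))  # 16 (i.e. 16:1)
```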


Which FC-VC Port does my blade’s HBA map to?

The front bay you insert your blade into determines which individual 4Gb/s port it maps to (and shares with other blades) on the FC-VC module; it’s not just a virtual “pool” of connections. This is important when you plan your deployment, as it can affect the way failover works.

The following table is what we found from experimentation and a quick glance at the “HP Virtual Connect Cookbook” (more on this later).

FC-VC Port                        Maps to (and is shared by) HBAs in chassis bays
Bay 3 Port 1 / Bay 4 Port 1       1, 5, 11, 15
Bay 3 Port 2 / Bay 4 Port 2       2, 6, 12, 16
Bay 3 Port 3 / Bay 4 Port 3       3, 7, 9, 13
Bay 3 Port 4 / Bay 4 Port 4       4, 8, 10, 14


Each individual blade has a dual-port HBA, so for example the HBA within the blade in bay 12 maps out as follows:

HBA Port 1 –> Interconnect Bay 3, Port 2

HBA Port 2 –> Interconnect Bay 4, Port 2
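Putting the table and the bay-12 example together, the mapping can be sketched as a small lookup. This is a hypothetical helper; only the bay-to-port numbers come from the table above:

```python
# Bay-to-port mapping observed for a c7000 with FC-VC modules in
# interconnect bays 3 and 4 (numbers taken from the table above).

PORT_FOR_BAY = {
    1: 1, 5: 1, 11: 1, 15: 1,
    2: 2, 6: 2, 12: 2, 16: 2,
    3: 3, 7: 3, 9: 3, 13: 3,
    4: 4, 8: 4, 10: 4, 14: 4,
}

def hba_paths(bay: int) -> dict:
    """Return which interconnect bay/port each HBA port of a blade uses."""
    port = PORT_FOR_BAY[bay]
    return {
        "HBA port 1": f"Interconnect bay 3, port {port}",
        "HBA port 2": f"Interconnect bay 4, port {port}",
    }

# The blade in bay 12 lands on port 2 of each FC-VC module:
print(hba_paths(12))
```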


Looking at it from the point of view of a single SAN-attached blade, the following diagram shows how it all should hook together.


Path Failover

Unplugging an FC cable from bay 3, port 4 will disconnect one of the HBA connections to all of the blades in bays 4, 8, 10 and 14, and force each blade’s host OS to handle a failover to its secondary path via the FC-VC module in bay 4.


A key takeaway from this is that your blade hosts still need to run some kind of multipathing software, like MPIO or EMC PowerPath, to handle the failover between paths – the FC-VC modules don’t handle this for you.



FC Loading/Distribution

A further point to take away is that if you plan to fill your blade chassis with SAN-attached blades, each with an HBA connected to a pair of FC-VC modules, then you need to plan your bay assignments carefully based on your server load.

Imagine you were to put heavily used SAN-attached VMware ESX servers in bays 1, 5, 11 and 15 and lightly used servers in the rest of the bays: you would have a bottleneck, as your ESX blades would all be contending with each other for a single pair of 4Gb/s ports (one on each of the FC-VC modules). If you instead distributed them into (for example) bays 1, 2, 3 and 4, you would spread the load across individual 4Gb/s FC ports.
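That placement argument can be sketched with the bay-to-port mapping from earlier in the post. The helper is hypothetical; only the mapping numbers come from what we observed:

```python
# Compare two bay placements for four heavy ESX blades, using the
# bay-to-port mapping observed earlier in the post.

PORT_FOR_BAY = {1: 1, 5: 1, 11: 1, 15: 1,
                2: 2, 6: 2, 12: 2, 16: 2,
                3: 3, 7: 3, 9: 3, 13: 3,
                4: 4, 8: 4, 10: 4, 14: 4}

def ports_used(bays):
    """Distinct FC-VC uplink ports a set of blade bays lands on."""
    return {PORT_FOR_BAY[b] for b in bays}

# Bays 1, 5, 11, 15 all share a single port per module (a bottleneck),
# while bays 1, 2, 3, 4 spread the load across all four ports.
print(ports_used([1, 5, 11, 15]))
print(ports_used([1, 2, 3, 4]))
```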

Your approach of course may vary depending on your requirements, but I hope this post has been of use.


There is a very, very useful document from HP called the HP Virtual Connect Fibre Channel Cookbook that covers all this in great detail. It doesn’t seem to be available on the web, and the manual and online documentation don’t seem to have any of this information; if you want a copy you’ll need to contact your HP representative and ask for it.

VLAN Tagging with ESX and HP Blade Virtual Connect Modules


Useful reference articles here, here and clarification from Scott here.

Looking like I’ll be doing a lot with the HP C-Class Blade chassis this year, so this is useful.