The BladeCenter H has eight switch bays and two Advanced Management Module (AMM) bays. The two AMMs act in much the same way as the Onboard Administrator on the HP c-Class, with the second module there for redundancy. Two of the eight switch bays are used for Fibre Channel switches; for this project we are using Brocade 4Gb SAN switches.
The other bays are occupied by Cisco GbE Switch Modules.
HS21s are used for the initial phase of the project. These blades can accommodate up to 6 NICs and 2 HBAs, with 2 NICs onboard and the remaining interfaces provided by daughtercards. The customer has elected to use 4 NICs as opposed to the 6 that I normally recommend for ESX implementations. The two extra NICs are provided by the CFFh daughtercard, which houses 2 network adapters AND 2 Fibre Channel HBAs.
The table below (from IBM) shows the interface-to-bay mapping.

Rather than following the IBM table (which dedicates an adapter to a service), I will be teaming Interface 0 (eth0) with Interface 3 (eth3), as this pairs one onboard port with one daughtercard port, so the failure of either adapter does not take down a whole team. Likewise, eth1 will be teamed with eth2.
* The location of the two Fibre Channel adapters should be Daughter Card CFF-h, not CFF-v as shown in the IBM table.
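To make the pairing explicit, here is a quick sketch (purely illustrative, not part of the build) that checks each proposed team spans both the onboard adapter and the CFFh daughtercard. The eth0-eth3 names follow the numbering above; the location labels stand in for the actual bay mapping shown in the diagram.

# Minimal sketch of the teaming decision above: each team pairs one onboard
# port with one CFFh daughtercard port, so losing either adapter (or its
# upstream switch module) never takes out a whole team. The location labels
# are failure domains, not real bay numbers.
INTERFACES = {
    "eth0": "onboard",
    "eth1": "onboard",
    "eth2": "CFFh daughtercard",
    "eth3": "CFFh daughtercard",
}

# Proposed teams: eth0 + eth3 and eth1 + eth2.
TEAMS = [("eth0", "eth3"), ("eth1", "eth2")]

def team_is_resilient(team):
    """A team is resilient if its members sit on different adapters."""
    return len(set(INTERFACES[nic] for nic in team)) > 1

for team in TEAMS:
    status = "OK" if team_is_resilient(team) else "single point of failure"
    print(" + ".join(team) + ": " + status)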
The following diagram shows the correct mapping.

The table below details the network interconnects.

6 comments:
Hugo,
Great post. I can't wait to see the following parts.
Good explanation and illustration of planning the NIC redundancy between the onboard and daughtercard ports.
I have recently been leaning towards combining all 4 NICs in one vSwitch though, unless of course the design is for iSCSI storage.
Hugo,
Great design! Since the IBM CFFh card is hard-coded to 2 HBAs and 2 NICs, the only place you are really flexible is the CFFv slot. I believe using NICs in that slot is a very good choice. Will you be using the Cisco or the BNT Ethernet switches?
I look forward to seeing more!
Thanks guys for your feedback, somebody actually reads my blog!
Rich, indeed, 4 NICs in one vSwitch is also an option. I tend to veer towards splitting the different traffic types onto dedicated pNICs though, as this helps the hypervisor optimise the type of traffic going through the hardware. This is extremely effective for VDI projects, and also for DMZ implementations where the DMZ traffic goes through completely separate copper. I've sketched the two layouts at the end of this comment.
I'm currently working on the project as I type this, so I will update the blog with Part 2 when I can.
Aaron, the customer has opted for Cisco.
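For anyone curious, here is a purely illustrative sketch of the two layouts Rich and I are describing. I've kept the eth names from the post (in ESX the physical ports show up as vmnicN), and the vSwitch and port group names are examples rather than the customer's actual configuration.

# Option 1: all four pNICs in a single vSwitch, traffic separated only by port group.
single_vswitch = {
    "vSwitch0": {
        "pnics": ["eth0", "eth1", "eth2", "eth3"],
        "portgroups": ["Service Console", "VMotion", "VM Network"],
    },
}

# Option 2 (my preference): dedicated pNIC pairs per traffic type, each pair
# spanning one onboard port and one CFFh daughtercard port.
dedicated_pnics = {
    "vSwitch0": {"pnics": ["eth0", "eth3"], "portgroups": ["Service Console", "VMotion"]},
    "vSwitch1": {"pnics": ["eth1", "eth2"], "portgroups": ["VM Network"]},
}

for name, cfg in sorted(dedicated_pnics.items()):
    print(name + ": " + ", ".join(cfg["pnics"]) + " -> " + ", ".join(cfg["portgroups"]))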
What is in Part 2?
I'm awaiting the customer's decision on procuring the additional CFF-v daughtercard; if they go ahead, I'll show how the design will look with 6 network interfaces, which of course will provide resilient vSwitches for all pNICs. Also in Part 2 is a diagrammatic view of the last table in Part 1, and depending on the customer's decision, I may post the 6-pNIC diagram too.
Watch this space.
Hugo - Since they will be using Cisco switches, you might want to consider utilizing link state tracking. Scott Lowe has written a great article on the subject, and I had some follow-up items.
Link to Scott's: http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/
Link to mine: http://bladevault.info/?p=11
Good Luck!
Sorry, Scott's link ran off the page:
Link State Tracking
Guys,
I've added another post up top.
Have you encountered ESX 3.5 crashing on HS21 XM blades?