Saturday, 2 February 2008

Planning a VMware ESX deployment on IBM BladeCenter H - Part 1

Well, here I am again, starting a new project for a new customer at a new datacenter. This time it's a large retail organisation looking to do the usual: consolidate, virtualise, go green, etc. They have selected IBM System x and BladeCenter H as the platforms of choice for the new VMware ESX 3 environment. So here we go with the planning...

The BladeCenter H has eight switch bays and two Advanced Management Module (AMM) bays. The two AMMs act in much the same way as the Onboard Administrator on the HP c-Class, with the second module providing redundancy. Two of the eight switch bays are used for Fibre Channel switches; for this project we are using Brocade 4Gb SAN switches.

The other bays are occupied by Cisco GbE Switch Modules.

HS21s are used for the initial phase of the project. These blades can accommodate up to 6 NICs and 2 HBAs, with 2 NICs onboard and the other 4 interfaces provided by daughtercards. The customer has elected to use 4 NICs as opposed to the 6 that I normally recommend for ESX implementations. The two extra NICs are provided by the CFFh daughtercard, which houses 2 network adapters and 2 Fibre Channel HBAs.
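
Once ESX 3 is on a blade, it is worth confirming that all four ports are actually visible before building the teams. From the service console (assuming the standard ESX 3 tools, and that eth0-eth3 enumerate as vmnic0-vmnic3):

    # List the physical NICs the VMkernel sees; expect vmnic0-vmnic3,
    # along with their link state and speed
    esxcfg-nics -l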

The table below (from IBM) shows the interface-to-bay mapping.

Since only 4 interfaces are available, teaming and VLANs will have to be used to provide resilience and to separate the Service Console (SC) and VMkernel networks.

I will be teaming Interface 0 (eth0) with Interface 3 (eth3), rather than dedicating an adapter to a service as the IBM table does, so that each team pairs one onboard port with one daughtercard port. Likewise, eth1 will be teamed with eth2.
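
As a rough sketch of that layout from the service console (the VLAN IDs and the eth-to-vmnic mapping below are illustrative assumptions, not the customer's actual values):

    # vSwitch0: Service Console + VMkernel, teamed across one onboard
    # and one daughtercard port (assuming eth0 = vmnic0, eth3 = vmnic3).
    # On a fresh install vSwitch0 and the Service Console port group
    # already exist; shown here for completeness.
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic3 vSwitch0

    # Separate SC and VMkernel traffic with VLAN-tagged port groups
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
    esxcfg-vswitch -A "VMkernel" vSwitch0
    esxcfg-vswitch -v 20 -p "VMkernel" vSwitch0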

* The location of the two Fibre Channel adapters should be Daughter Card CFF-h, not CFF-v as shown in the IBM table.

The following diagram shows the correct mapping.

[Diagram: corrected interface-to-bay mapping]

The table below details the network interconnects.

In that table:

Interface: the network adapter inside a blade.
Location: where the interface physically resides (onboard or on a daughtercard).
Chassis Bay: the bay where the interface terminates at the rear of the BladeCenter chassis.
pSwitch: the external core switch that the Chassis Bay uplinks to.
vSwitch: the ESX virtual switch that the Interface provides an uplink for.
vLAN: the VLAN ID assigned to each Port Group.
Service: the type of port group assigned to a vSwitch.
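
On the pSwitch side, the corresponding uplinks from the Cisco modules would be 802.1Q trunks carrying those VLANs. A minimal IOS sketch (the port and VLAN numbers are illustrative, not taken from the actual design):

    ! External uplink from a Cisco switch module to the core pSwitch,
    ! trunking the SC, VMkernel and VM VLANs
    interface GigabitEthernet0/17
     description Uplink to core pSwitch
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30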

6 comments:

Rich said...

Hugo,

Great post. I can't wait to see the following parts.

Good explanation and illustration of planning the NIC redundancy between the onboard and daughter card ports.

I have recently been leaning towards combining all 4 NICs in one vSwitch though, unless of course the design is for iSCSI storage.

Aaron Delp said...

Hugo,
Great design! Since the IBM CFFh card is hard coded to 2 HBAs and 2 NICs, the only place you are really flexible is the CFFv slot. I believe your choice to use NICs in that slot is a very good one. Will you be using the Cisco or the BNT Ethernet switches?

I look forward to seeing more!

Hugo said...

Thanks for the feedback, guys; somebody actually reads my blog!

Rich, indeed, 4 NICs in one vSwitch is also an option. I tend to veer towards splitting the different traffic types onto dedicated pNICs though, as this helps the hypervisor optimise the type of traffic going through the hardware. This is extremely effective for VDI projects, and also for DMZ implementations where the DMZ traffic goes through completely separate copper. A sketch of the split is below.
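
To illustrate the split (the vmnic numbering and VLAN ID here are the same hypothetical values as in the sketch in the post), the VM traffic would land on its own vSwitch with its own team:

    # vSwitch1: dedicated to virtual machine traffic (vmnic1 + vmnic2)
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "VM Network" vSwitch1
    esxcfg-vswitch -v 30 -p "VM Network" vSwitch1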

I'm currently working on the project as I type this so I will update the blog with Part 2 when I can.

Aaron, the customer has opted for Cisco.

What is in Part 2?

I'm awaiting the customer's decision on procuring the additional CFF-v daughtercard. If they go ahead, I'll show how the design will look with 6 network interfaces, which of course will provide resilient vSwitches for all pNICs. Also in Part 2 is a diagrammatic view of the last table in Part 1, and depending on the customer's decision, I may post the 6-pNIC diagram too.

Watch this space.

Aaron Delp said...

Hugo - Since they will be using Cisco switches, you might want to consider utilizing link state tracking. Scott Lowe has written a great article on the subject, and I had some follow-up items.

Link to Scott's: http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/

Link to mine:
http://bladevault.info/?p=11
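
In short, link state tracking shuts the internal blade-facing ports down when all of the external uplinks fail, so the ESX NIC team can fail over to the other module. A minimal IOS sketch (group and interface numbers are illustrative):

    ! Define a tracking group on the Cisco switch module
    link state track 1
    ! External uplink: upstream member of the group
    interface GigabitEthernet0/17
     link state group 1 upstream
    ! Internal blade-facing port: brought down if all upstream links fail
    interface GigabitEthernet0/1
     link state group 1 downstream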

Good Luck!

Aaron Delp said...

Sorry, Scott's link ran off the page:

Link State Tracking

Hugo said...

Guys,

I've added another post up top.

Have you encountered ESX 3.5 crashing on HS21 XM blades?