Saturday, 23 February 2008

Er... problem with HS21 XM (7995) and ESX 3.5

This is a bit of an issue. I've just test-installed ESX 3.5 onto an HS21 XM (7995) blade running BIOS v1.07. The install goes fine and the server boots and runs stably, but every time I reboot from the console or restart using the VI Client I get a purple screen of death.

Now I know there is an issue with quad-core Xeons and HS21 blades, but wasn't this fixed in the latest BIOS versions? I believe it was fixed in BIOS 1.06 on the standard HS21, but was the same fix applied to the HS21 XM (7995) in v1.07?

IBM and VMware support tickets have been opened, but are there any working fixes out there?

Tuesday, 12 February 2008

Does anyone know what this tool is?

It was used in the "Fundamentals of Disaster Recovery in Virtualized Environments" session at VMware World Partner Day 2007.

Any help identifying it would be great!

Tuesday, 5 February 2008

Planning a VMware ESX deployment on IBM BladeCenter H - Part 2

In the previous post I covered the network design for an HS21 with four network interfaces. This post continues with a diagrammatic representation of the interface table.

As described previously, this configuration provides full network fault tolerance at every level: adapter, port, CAT5 cable, switch bay and core switch.


Put your finger over any individual component (pNIC, interface, bay switch or core switch) to simulate a failure, and there will always be an alternative path.
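
To make that finger test concrete, here is a minimal Python sketch that models each interface's path out to the core and confirms that no single-component failure removes every path. The bay and core switch names are illustrative assumptions rather than the customer's actual layout.

from itertools import chain

# Each path: pNIC -> chassis bay switch -> core switch.
# Bay and core switch names are assumed for illustration.
paths = [
    ("eth0", "bay1-switch", "core-A"),
    ("eth1", "bay2-switch", "core-B"),
    ("eth2", "bay7-switch", "core-A"),
    ("eth3", "bay8-switch", "core-B"),
]

components = set(chain.from_iterable(paths))

for failed in sorted(components):
    # Drop every path that traverses the failed component.
    survivors = [p for p in paths if failed not in p]
    assert survivors, "single point of failure: %s" % failed
    print("%-12s down -> %d alternative path(s) remain" % (failed, len(survivors)))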

I'm waiting for the customer to decide on whether to include the CFFv daughtercard in this phase of the project, and will update this post with the new design if required.

Next up, environmentals...

Those of you familiar with HP c-Class blades will probably know there is a superb tool called the HP BladeSystem Power Sizer 2.9. I've been trying to find an equivalent from IBM, but as yet have not found anything that comes close. (Any pointers would be appreciated.)

Instead I've had to resort to data from The Edison Group study Blade Server Power Study - IBM BladeCenter and HP BladeSystem, 7 November 2007 (document "BLL03002USEN.pdf").

In summary, the results show that a BladeCenter H chassis with 14 blades at full load will dissipate 14,352.51 BTU/hr, with a peak power consumption of 4,208.80 W. Most modern datacenters with good power feeds will be able to accommodate that kind of load. Cooling requirements will be left to the customer to calculate.
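
As a quick sanity check on those numbers, watts convert to BTU/hr by the standard factor of roughly 3.412; a one-liner in Python:

# Sanity check: 1 W is roughly 3.412 BTU/hr.
peak_watts = 4208.80
btu_per_hr = peak_watts * 3.412
print("%.2f W -> %.2f BTU/hr" % (peak_watts, btu_per_hr))
# Prints about 14,360 BTU/hr, within a fraction of a percent of the
# study's 14,352.51 BTU/hr figure.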

Additionally, this single chassis will require 9 rack units and 4 power feeds due to the additional 2900W power supply modules.

Part 2... continued

Thank you Aaron for your help with the power sizer.

Here is the output from the tool (not as nice as HP's offering, by the way):

In the next part... network design for the x3650.

Saturday, 2 February 2008

Planning a VMware ESX deployment on IBM BladeCenter H - Part 1

Well, here I am, starting a new project for a new customer at a new datacenter again. This time it's a large retail organisation looking to do the usual: consolidate, virtualise, go green, etc. They have selected IBM System x and BladeCenter H as the platforms of choice for the new VMware ESX 3 environment. So here we go with the planning...

The BladeCenter H has eight switch bays and two Advanced Management Module (AMM) bays. The two AMMs act in much the same way as the Onboard Administrator on HP c-Class, with two fitted for redundancy. Two of the eight switch bays are used for Fibre Channel switches; for this project we are using Brocade 4Gb SAN switches.

The other bays are occupied by Cisco GbE Switch Modules.

HS21s are used for the initial phase of the project. These blades can accommodate up to six NICs and two HBAs: two NICs onboard, with the other four provided by daughtercards. The customer has elected to use four NICs as opposed to the six that I normally recommend for ESX implementations. The two extra NICs are provided by the CFFh daughtercard, which houses two network adapters AND two Fibre Channel HBAs.

The table below (from IBM) shows the interface-to-bay mapping.

Since only four interfaces are available, teaming and VLANs will have to be used to provide resilience and to separate the Service Console and VMkernel networks.

I will be teaming Interface 0 (eth0) with Interface 3 (eth3), rather than following the IBM table (which dedicates an adapter to a service), as this teams one onboard port with one daughtercard port. Likewise, eth1 will be teamed with eth2.
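
To make that concrete, here is a minimal sketch of how the teaming could be laid down from the ESX service console with esxcfg-vswitch, driven from Python. The vSwitch names, VLAN IDs and the eth-to-vmnic mapping are my own assumptions for illustration, and the sketch assumes the port groups do not already exist; verify all of it against the actual blade before use.

import subprocess

def esxcfg(*args):
    # Run the ESX 3.x service console utility esxcfg-vswitch.
    subprocess.check_call(["esxcfg-vswitch"] + list(args))

# vSwitch0: Service Console + VMkernel, teamed across vmnic0 (onboard)
# and vmnic3 (CFFh daughtercard). vSwitch0 normally exists after install;
# create it first with esxcfg("-a", "vSwitch0") if it does not.
esxcfg("-L", "vmnic0", "vSwitch0")
esxcfg("-L", "vmnic3", "vSwitch0")
esxcfg("-A", "Service Console", "vSwitch0")
esxcfg("-v", "10", "-p", "Service Console", "vSwitch0")   # VLAN 10 (assumed)
esxcfg("-A", "VMkernel", "vSwitch0")
esxcfg("-v", "20", "-p", "VMkernel", "vSwitch0")          # VLAN 20 (assumed)

# vSwitch1: virtual machine traffic, teamed across vmnic1 (onboard)
# and vmnic2 (CFFh daughtercard).
esxcfg("-a", "vSwitch1")
esxcfg("-L", "vmnic1", "vSwitch1")
esxcfg("-L", "vmnic2", "vSwitch1")
esxcfg("-A", "VM Network", "vSwitch1")
esxcfg("-v", "30", "-p", "VM Network", "vSwitch1")        # VLAN 30 (assumed)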

* The location of the two Fibre Channel adapters should be daughtercard CFFh, not CFFv as shown in the IBM table.

The following diagram shows the correct mapping.

The table below details the network interconnects.

Interface is the network adapter inside a blade; Location is where that interface resides; Chassis Bay is where the interface terminates at the rear of the BladeCenter chassis; pSwitch is the external core switch that the chassis bay uplinks to; vSwitch is the ESX virtual switch that the interface provides an uplink for; vLAN is the ID assigned to each port group; and Service is the type of port group assigned to a vSwitch.
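
If it helps to script against that mapping, here is a hypothetical sketch of the table as a Python structure. Every value below is a placeholder chosen to show the schema (and to match the assumed teaming above), not the actual assignment from the design.

from collections import namedtuple

# One row per blade interface; all values are illustrative placeholders.
Interconnect = namedtuple(
    "Interconnect",
    ["interface", "location", "chassis_bay", "pswitch", "vswitch", "vlan", "service"],
)

table = [
    Interconnect("eth0", "onboard",           "Bay 1", "core-A", "vSwitch0", 10, "Service Console"),
    Interconnect("eth3", "CFFh daughtercard", "Bay 8", "core-B", "vSwitch0", 20, "VMkernel"),
    Interconnect("eth1", "onboard",           "Bay 2", "core-B", "vSwitch1", 30, "VM Network"),
    Interconnect("eth2", "CFFh daughtercard", "Bay 7", "core-A", "vSwitch1", 30, "VM Network"),
]

# Example query: which interfaces provide uplinks for vSwitch0?
print([row.interface for row in table if row.vswitch == "vSwitch0"])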