
Unified Fabric (UF) – UCS Manager

Light Reading
Series Column

EXECUTIVE SUMMARY: The UCS Manager successfully brought up new blades and replaced failed blades automatically through the unified fabric.

Earlier we underlined the importance of reducing operational costs for cloud operators. Due to the number of components in the data center, its management has historically been quite complex. In recent years several advances such as the adoption of Fibre Channel over Ethernet (FCoE) and the virtualization of servers have helped simplify data center management. Cisco contributed to the simplification of data center operations with the introduction of its Unified Fabric solution -- a system that aggregates server connections and reduces the amount of cabling and resources needed. Unified Fabric is one of the cornerstones in Cisco’s Unified Computing System (UCS), which also includes server blades, fabrics and the focus of this test: UCS Manager.

Simplifying data center operations and reducing the duration of tasks are two ways to control and decrease the operational costs of running a data center. In our test we looked at both aspects. Specifically, we looked at Cisco's UCS service profiles -- a saved set of attributes associated with a blade. The service profile, or template, is configured once with definitions for VLAN pools, World Wide Names (WWNs), MAC addresses and a pointer to the appropriate boot disk image sitting in the Storage Area Network (SAN). Once the profile is applied to a blade, the operator can expect its services to come up automatically. If a blade fails, the profile can be moved automatically to a new physical blade, speeding up failure recovery. Through the UCS Manager the operator can see the status of the various blades and create the configuration.
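To make the concept concrete, here is a minimal sketch of what a service profile holds and how it binds to a chassis slot. The field names, values and the `associate` helper are our own illustration, not Cisco's actual UCS Manager API (which is XML-based and far richer):

```python
from dataclasses import dataclass

# Hypothetical model of a UCS-style service profile: the server's
# identity lives in the profile, not in the physical blade, so it can
# follow the workload to new hardware. All names are illustrative.
@dataclass
class ServiceProfile:
    name: str
    vlan_ids: list        # VLANs the blade's virtual NICs should join
    mac_address: str      # MAC drawn from a pool, not burned-in
    wwn: str              # World Wide Name for SAN access
    boot_target: str      # SAN LUN holding the boot disk image

# A chassis slot maps to at most one profile; whichever blade sits in
# the slot inherits the profile's identity once it is associated.
slots = {}

def associate(slot: int, profile: ServiceProfile) -> None:
    slots[slot] = profile

associate(3, ServiceProfile(
    name="web-01",
    vlan_ids=[10, 20],
    mac_address="00:25:b5:00:00:01",
    wwn="20:00:00:25:b5:00:00:01",
    boot_target="san-lun-7",
))
```

Because the slot, not the blade, owns the profile, swapping the physical blade in slot 3 leaves `slots[3]` unchanged -- which is exactly the behavior our first test exercises.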

We started this test by configuring service profiles. The service profiles are typically stored in the Cisco UCS 6100 Fabric Interconnect, which runs the embedded UCS Manager. We then set up two test scenarios. In the first test, we associated a service profile with a slot in the Cisco UCS 5000 Blade Server Chassis. Of the eight blades installed in the chassis, six were in use by other applications, one was active and running the profile we had just created, and one was completely shut down. Before going down to the lab, we initiated ping messages both to the blade's IP and to the IP of a VM running on that blade. We then pulled our active blade out of the chassis and replaced it with a different blade. Our expectation was that the UCS Manager would load the same profile onto the new blade without our involvement, since the profile was associated with the slot.

We came back to the lab and checked our ping messages. The replacement blade took 595 seconds to boot and respond; the UCS Manager had applied the same profile to the new blade and activated it. The ping to the VM, however, started getting responses after only 101 seconds (we sent one ping per second). This faster recovery was achieved by a VM-level failover mechanism that moved the VM to another blade altogether.
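Our "seconds until first reply" figures come from counting one-per-second probes until the first answer. The harness below is our own sketch of that measurement, not part of UCS Manager; in the real test the probe simply shelled out to `ping -c 1 -W 1 <ip>`:

```python
import time
from typing import Callable

def seconds_until_first_reply(probe: Callable[[], bool],
                              timeout: int = 900,
                              interval: float = 1.0) -> int:
    """Call probe() once per interval and return the number of failed
    attempts before the first success -- at a 1-second interval, that
    is the elapsed seconds until the target answered. Returns -1 if
    the target never answers within the timeout."""
    for attempt in range(timeout):
        if probe():
            return attempt
        time.sleep(interval)
    return -1
```

With a real ping probe, a return value of 595 would correspond to the replacement blade's recovery time, and 101 to the VM's failover.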

In the second test we wanted to see whether we could bring up a full chassis of UCS blades through automatic service profiles. We started by emulating the full scenario as closely as possible, putting ourselves in the administrator's shoes. We went to the lab and made sure there were no blades in our chassis, as if we were waiting for them to be shipped. We then created service profiles for each blade, requesting some randomization on the fly to ensure that they were new profiles, and associated the profiles with the empty blade slots. Back in the lab, we initiated pings to the first and last of the eight blades, then pushed the blades in and waited. Since the blades' IPs were DHCP-based, we had to configure the DHCP server to bind IP addresses to the service profile MAC addresses, which in turn served as another verification point that the service profiles were used.
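Binding DHCP addresses to the profile MACs is done with static host reservations. The snippet below renders ISC dhcpd-style `host` blocks for the first and last blade; the hostnames, MACs and IPs are invented for illustration, not the addresses we actually used:

```python
def dhcp_reservation(hostname: str, mac: str, ip: str) -> str:
    """Render one ISC dhcpd host block that pins a service-profile MAC
    to a fixed IP, so the blade answers at a known address as soon as
    its profile comes up."""
    return (f"host {hostname} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  fixed-address {ip};\n"
            f"}}\n")

# Example: reservations for the first and eighth blade in the chassis.
config = "".join(
    dhcp_reservation(f"blade-{n}", f"00:25:b5:00:00:0{n}", f"10.0.0.{n}")
    for n in (1, 8)
)
```

If a blade answers at its reserved address, the DHCP lease confirms it is presenting the MAC from its service profile -- the extra verification point mentioned above.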

Indeed, as we watched the blades come up through the management tool, the ping messages started receiving responses. The first of the chosen blades responded to pings after 647 seconds, and the last blade after 704 seconds. The VM we started on the eighth blade was responding to pings just over thirteen minutes after we began inserting blades. Had the UCS Manager service profiles not been available to us, we would have had to boot the blades; look through our system to see which MAC addresses, VLANs, etc., should be used; connect to each blade physically and configure these variables; ensure that they could reach the SAN and boot from it; and of course debug any issues that came up along the way. The UCS Manager reduced these steps to a simple "verify on the GUI that all blades are up and working," and worked quite smoothly from the get-go.

