Jul 24, 2015

Cisco - UCS - M-Series - Architecture Overview

  • UCS M-Series was designed to complement compute infrastructure requirements in the data center
  • The goal of the M-Series is to offer smaller compute nodes to meet the needs of scale-out applications, while taking advantage of the management infrastructure and converged innovation of UCS
  • By disaggregating the server components, UCS M-Series enables a component life cycle management strategy as opposed to a server-by-server strategy
  • UCS M-Series provides a platform for flexible and rapid deployment of compute, storage, and networking resources


Density Optimized Servers
A common theme among server vendors has been to shrink the server into a smaller consumable unit, but to house it in a chassis that still fits the width of a standard data center rack. This class of server lets you optimize space in the data center. It is not unlike a blade chassis, but at a smaller scale and with less focus on converged infrastructure and management. The goal of these devices is to provide a miniaturized server (cartridge) that is placed in a sheet metal container (chassis) that provides power, cooling, and sometimes aggregation of networking.


UCS M4308 Chassis
  • 2U rack-mount chassis (3.5" H x 30.5" L x 17.5" W)
  • 3rd-generation UCS VIC ASIC (System Link Technology)
  • 1 Chassis Management Controller (CMC) that manages chassis resources
  • 8 cartridge slots (x4 PCIe Gen3 lanes per slot)
    • Slots and ASIC adaptable for future use
  • Four 2.5" SFF drive bays (SSD only)
  • 1 internal x8 PCIe Gen3 connection to the Cisco 12 Gb SAS RAID card
  • 1 internal x8 PCIe Gen3 slot, half height, half width (future use)
  • 6 hot-swappable fans, accessed by removing the top rear cover
  • 2 external Ethernet management ports and 1 external serial console port (connected to the CMC for out-of-band troubleshooting)
  • 2 AC power module bays (1+1 redundant, 220V only)
  • 2 QSFP 40-Gbps ports (data and management for both servers and chassis)
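As a rough check on the fabric sizing above, the per-slot and RAID-card PCIe bandwidth can be worked out from standard PCIe Gen3 numbers. This is a back-of-the-envelope sketch; real throughput is lower once protocol overhead is accounted for.

```python
# Back-of-the-envelope PCIe bandwidth for the chassis layout above.
# PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding,
# giving roughly 0.985 GB/s of usable bandwidth per lane.
GEN3_GB_PER_LANE = 8 * 128 / 130 / 8     # ~0.985 GB/s per lane

slot_gbps = 4 * GEN3_GB_PER_LANE         # x4 lanes per cartridge slot
raid_gbps = 8 * GEN3_GB_PER_LANE         # x8 link to the SAS RAID card

print(f"per-slot: {slot_gbps:.2f} GB/s, RAID link: {raid_gbps:.2f} GB/s")
```

In other words, each cartridge slot has on the order of 4 GB/s of host connectivity into the chassis ASIC, and the shared RAID card has twice that.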


But if we examine these servers closely, we see that they are frequently stripped-down versions of the same 1RU server. These smaller servers are typically referred to as cartridges that are housed in a chassis. Each server cartridge still has its own individual storage controller, network controller, blade management controller, and hard drives. These resources are dedicated to that specific server and are not usable by any other server in the chassis. The servers have their own networking, and in cases where the chassis does not provide an aggregation switch, they have to be individually cabled to a top-of-rack switch. In an effort to keep the cost down, these servers are also typically limited in individual networking bandwidth and storage controller capabilities. The concept is a good start, but we are just miniaturizing to help deal with scale. There are many other factors to consider once these servers are commissioned in the data center, such as deployment, operations, and lifecycle management.


Disaggregation of the Server Components
The Cisco UCS M-Series platform takes a different approach. Building on the fundamentals of converged infrastructure, the Cisco UCS M-Series takes advantage of proven Cisco application-specific integrated circuit (ASIC) design and technologies to decouple the networking and storage components of the server cartridge and provide them as flexible, configurable resources that can be distributed as needed to the servers within the chassis. Cisco also moves the server management controller into a shared onboard chassis resource, so discrete, individually managed add-in cards are not needed. In this model the server is simply the CPU and memory, with standard PCIe connectivity into the chassis resources. The components that are shared in the chassis are now power, management, cooling, storage, and networking.



Cisco System Link Technology
The Cisco UCS platform is built around the concept of a converged fabric that allows flexibility and scalability of resources. The foundation of the Cisco UCS converged architecture has always been the functionality provided by the Cisco UCS Virtual Interface Card (VIC) through a Cisco ASIC. The Cisco technology behind the VIC has been extended in the latest-generation ASIC to provide multiple PCIe buses that connect to multiple servers simultaneously. This third-generation Cisco ASIC provides the System Link Technology, which extends a PCIe bus to each of the servers, creating a virtual device on the PCIe host interface for use by the local CPU complex. The OS sees this as a local PCIe device, and I/O traffic is passed up the host PCIe lanes to the ASIC. From there it is mapped to the appropriate shared resource—the local storage or the networking interface.
This overall technology is not new to Cisco UCS. In fact, it is core to the infrastructure flexibility provided in the Cisco UCS architecture. Those familiar with Cisco UCS know that the VIC allows administrators to configure the appropriate Ethernet or storage interfaces to be provided to the host OS. These are known as virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs) within the construct of Cisco UCS Manager. To the OS, the vNIC and vHBA are seen as PCIe end devices, and the OS communicates with them as it would with any physical PCIe device.

  • System Link Technology is built on proven Cisco Virtual Interface Card (VIC) technologies
  • VIC technologies use standard PCIe architecture to present an endpoint device to the compute resources
  • VIC technology is a key component of the UCS converged infrastructure
  • In the M-Series platform this technology has been extended to provide access to PCIe resources local to the chassis, such as storage
  • Traditional PCIe cards require additional PCIe slots for additional resources
  • System Link Technology provides a mechanism to connect to the PCIe architecture of the host
  • This presents unique PCIe devices to each server
  • Each device created on the ASIC is a unique PCIe physical function
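The idea in the list above can be sketched as a small model: a single chassis ASIC carves out a unique set of PCIe physical functions (vNICs plus an sNIC) for each cartridge slot, so no two servers share a device. This is purely an illustrative sketch; the class and field names are hypothetical, not Cisco's implementation.

```python
# Illustrative model of System Link Technology: one chassis ASIC
# presenting unique PCIe physical functions to each server cartridge.
# All names here are hypothetical; this is not Cisco's implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhysicalFunction:
    kind: str        # "vNIC" or "sNIC"
    pf_id: int       # unique function number on the ASIC

@dataclass
class SystemLinkASIC:
    next_pf: int = 0
    assignments: Dict[int, List[PhysicalFunction]] = field(default_factory=dict)

    def attach_cartridge(self, slot: int, vnics: int = 2) -> List[PhysicalFunction]:
        """Create a unique set of PFs for the server in this slot."""
        pfs = []
        for _ in range(vnics):
            pfs.append(PhysicalFunction("vNIC", self.next_pf))
            self.next_pf += 1
        pfs.append(PhysicalFunction("sNIC", self.next_pf))  # virtual storage controller
        self.next_pf += 1
        self.assignments[slot] = pfs
        return pfs

asic = SystemLinkASIC()
for slot in range(8):                # the M4308 chassis has 8 cartridge slots
    asic.attach_cartridge(slot)

# Every device is a unique physical function; no two slots share one.
all_ids = [pf.pf_id for pfs in asic.assignments.values() for pf in pfs]
assert len(all_ids) == len(set(all_ids))
```

The OS in each cartridge enumerates only its own functions as ordinary local PCIe devices, which is why no special virtualization layer is needed on the host.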

Virtualized Shared Local Storage
The shared local storage is enabled through two major components. At the server level is the sNIC, which is the virtualized storage controller. At the chassis level is the physical storage controller. These two components are tightly integrated to provide each server with a virtual drive that is referred to as a logical unit number (LUN) within the Cisco UCS Manager management structure and referenced as a virtual drive by the controller. Virtual drives can be carved out of a RAID drive group that is configured on the physical drives in the M-Series chassis. Cisco UCS Manager allows centralized, policy-based creation of these virtual drives and their mapping to service profiles. The virtual drives presented by these drive groups are available for consumption by servers installed in the chassis.

  • RAID configuration groups drives together to form a RAID volume
  • Drive groups define the operation of the physical drives (RAID level, write-back characteristics, stripe size)
  • Depending on the RAID level, a group can be as small as one drive (RAID 0); the M-Series supports up to four drives per group (RAID 0, 1, 5, 10)
  • M-Series chassis support multiple drive groups
  • Drive groups are configured through a new policy setting in UCS Manager
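The capacity arithmetic behind drive groups and virtual drives can be sketched with standard RAID math. The 960 GB drive size and 360 GB LUN size below are illustrative assumptions, and the function names are not UCS Manager's actual object model.

```python
# Sketch: usable capacity of a RAID drive group and carving virtual
# drives (LUNs) out of it. The formulas are standard RAID math; the
# drive and LUN sizes are assumed for illustration only.

def usable_capacity_gb(raid_level: int, drives: int, drive_gb: int) -> int:
    """Usable capacity for the RAID levels the M-Series chassis supports."""
    if raid_level == 0:                      # striping, no redundancy
        return drives * drive_gb
    if raid_level == 1:                      # mirroring
        return (drives // 2) * drive_gb
    if raid_level == 5:                      # single parity drive's worth
        return (drives - 1) * drive_gb
    if raid_level == 10:                     # striped mirrors
        return (drives // 2) * drive_gb
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Four (assumed) 960 GB SSDs in a RAID 5 drive group:
group_gb = usable_capacity_gb(5, drives=4, drive_gb=960)   # 2880 GB usable

def carve(group_gb: int, lun_gb: int) -> int:
    """How many equally sized virtual drives fit in the group."""
    return group_gb // lun_gb

print(carve(group_gb, 360))   # prints 8: one 360 GB LUN per cartridge slot
```

This is the trade-off the drive-group policy exposes: RAID level sets the usable pool, and the LUN size policy determines how that pool is divided among the servers in the chassis.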

Virtualized Storage Controller
The virtualized storage controller, or sNIC, is a PCIe device presented to the OS. As shown in Figure 8, it is this device that provides the pathway for SCSI commands from the server to the virtual drive. This controller is a new device to the OS and requires a sNIC driver to be loaded into the OS. Because the sNIC is a new PCIe device, its driver will not be part of some OS distributions; in those cases, the sNIC driver has to be loaded at installation time in order to see the storage device on the server. This driver, like the eNIC and fNIC drivers, will be certified by the OS vendors and eventually included as part of the core OS install package. When the driver is present, the virtual drive is visible to the OS and is presented as a standard hard drive connected through a RAID controller. The driver does not alter the SCSI command set as it is sent to the controller; instead, it provides a method for presenting the storage controller to the OS and the appropriate framing to carry the commands to the controller.

The chassis storage components consist of:
  • Cisco Modular 12 Gb SAS RAID Controller with 2 GB flash (the same controller used in the C-Series M4 servers)
  • Drive mid-plane
  • 4 SSD drives

Controller capabilities:
  • Supports RAID 0, 1, 5, 6, 10, 50, and 60
  • Up to 4 SSDs in the chassis drive bays
  • 6 Gb or 12 Gb SAS or SATA drives
  • No support for spinning media; SSD only (power, heat, performance)
  • All drives are hot swappable
  • RAID rebuild is supported for failed drives



Building the Complete Server Through System Link Technology
The Cisco UCS platform is built around the concept of a converged fabric that allows flexibility and scalability of resources. The foundation of this architecture has always been the Cisco ASIC technology within the virtual interface cards. The cornerstone of the M-Series platform is a third-generation ASIC based on this same innovative technology. This ASIC enables the System Link Technology that presents the vNIC (Ethernet interfaces) and the sNIC (storage controller) to the OS as a dedicated PCIe device for that server.
In addition to presenting these PCIe devices to the OS, the System Link Technology provides a method for mapping the vNIC to a specific uplink port from the chassis. It also provides quality-of-service (QoS) policy, rate-limiting policy, and VLAN mappings. For the sNIC, the System Link Technology provides a mapping of virtual drive resources on a chassis storage controller drive group to a specific server. This is the core technology that provides flexible resource sharing and configuration in the Cisco UCS M-Series servers.
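The per-device mappings described above can be summarized in a small data model: each vNIC carries an uplink, VLAN, QoS, and rate-limit mapping, while each sNIC carries a drive-group-to-LUN mapping. All class and field names below are hypothetical, chosen for readability; they are not UCS Manager's actual object model.

```python
# Illustrative sketch of the per-device mappings that System Link
# Technology maintains. Names are hypothetical, not Cisco's schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VnicMapping:
    name: str
    uplink_port: int                      # which chassis 40G QSFP uplink carries this vNIC
    vlans: List[int] = field(default_factory=list)
    qos_class: str = "best-effort"
    rate_limit_mbps: Optional[int] = None # None means line rate

@dataclass
class SnicMapping:
    name: str
    drive_group: str                      # chassis drive group backing the LUN
    lun_id: int

# One server cartridge's view of its share of the chassis resources:
eth0 = VnicMapping("eth0", uplink_port=1, vlans=[10, 20],
                   qos_class="gold", rate_limit_mbps=5000)
boot = SnicMapping("snic0", drive_group="dg-raid5", lun_id=0)
```

Because these mappings live in the chassis ASIC rather than on per-server add-in cards, changing them is a policy update rather than a hardware change.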


Management Considerations
More important than the technology that enables these functionalities is the policy-driven structure that provides the configuration and management of the server entities. Cloud-scale applications require a different type of physical infrastructure, consisting of an exceedingly large number of servers. When dealing with hundreds or thousands of servers, their deployment and operations become a key component of the overall system architecture. Cisco UCS Manager is used to manage the Cisco UCS M-Series servers and chassis. It leverages the underlying technology that enables the disaggregation of the server components to deliver policy-based management, with an open XML API and a growing ecosystem of strategic partnerships, for up to 20 M-Series chassis in a single Cisco UCS Manager domain. Cisco UCS Manager is a common platform for management of Cisco UCS B-Series, C-Series, and M-Series servers.
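The open XML API mentioned above is how external tools drive UCS Manager. As a minimal sketch, the snippet below builds (but does not send) the documented aaaLogin request that opens an API session; the host name in the comment is a placeholder, and credential handling and TLS setup are omitted.

```python
# Sketch: building a Cisco UCS Manager XML API login request.
# The aaaLogin method and its inName/inPassword attributes are part
# of the documented XML API; the target host below is a placeholder.
import xml.etree.ElementTree as ET

def build_login_request(user: str, password: str) -> bytes:
    """Serialize an aaaLogin request body for the XML API."""
    req = ET.Element("aaaLogin", inName=user, inPassword=password)
    return ET.tostring(req)

body = build_login_request("admin", "secret")
# POST this body to https://<ucsm-host>/nuova; the outCookie attribute
# in the response authenticates subsequent query and config methods.
print(body.decode())
```

Every other API operation follows the same pattern: an XML method element posted to the manager, which is what makes large-scale automation across many chassis practical.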

  • Version 2.5M release at initial shipping
  • M-Series only
  • Ethernet only
  • Support for 2 LUNs per host
  • Support for 4 vNICs per host
  • Merge of B-, C-, and M-Series support into a single build is planned for release 3.1(1) or later
  • UCS Central manageability planned for the next release after initial shipping



