Overview

System Overview

The Cisco UCS X9508 Server Chassis and its components are part of the Cisco Unified Computing System (UCS). This system can use multiple server chassis configurations along with the Cisco UCS Fabric Interconnects to provide advanced options and capabilities in server and data management. The following configuration options are supported:

  • All Cisco UCS compute nodes. In a compute node-only configuration, two Intelligent Fabric Modules (IFMs) are required.

  • A mix of Cisco UCS compute nodes and Cisco UCS PCIe nodes. In this configuration, the compute nodes are paired 1:1 with Cisco UCS PCIe nodes, such as the Cisco UCS X440p PCIe Node. Two Intelligent Fabric Modules (IFMs) and two Cisco UCS X9416 X-Fabric Modules (XFMs) are required.

All chassis components, compute nodes, and PCIe nodes are managed through the Cisco Intersight GUI or API.
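For example, basic chassis inventory can be read programmatically from the Intersight REST API. The following Python sketch is illustrative only: the get_intersight_session() helper is hypothetical (Intersight signs each REST request with an API key ID and secret key, which the official SDK can handle for you), and the equipment/Chasses resource path and Results envelope reflect our reading of the public Intersight API model, so verify them against the current API reference.

    import requests

    INTERSIGHT = "https://intersight.com/api/v1"

    def get_intersight_session() -> requests.Session:
        # Hypothetical helper: attach your Intersight API-key request
        # signing here, or use the official Intersight SDK instead.
        raise NotImplementedError("supply a signed session")

    session = get_intersight_session()

    # List the chassis claimed in this Intersight account; the resource
    # path and the $select query option are assumptions from the public
    # Intersight API model.
    resp = session.get(f"{INTERSIGHT}/equipment/Chasses",
                       params={"$select": "Name,Model,Serial"})
    resp.raise_for_status()
    for chassis in resp.json().get("Results", []):
        print(chassis.get("Name"), chassis.get("Model"), chassis.get("Serial"))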

The Cisco UCS X9508 Server Chassis system consists of the following components:

  • Chassis versions:

    • Cisco UCS X9508 server chassis–AC version

  • Intelligent Fabric Modules (IFMs), two deployed as a pair:

    • Cisco UCS 9108 100G IFMs (UCSX-I-9108-100G)—Two I/O modules, each with eight 100-Gigabit QSFP28 optical ports

    • Cisco UCS 9108 25G IFMs (UCSX-I-9108-25G)—Two I/O modules, each with eight 25-Gigabit SFP28 optical ports

  • X-Fabric Modules (UCSX-F-9416)—Two XFMs are required in each UCS X9508 server chassis to support GPU acceleration through Cisco UCS X440p PCIe nodes.

  • Power supplies—Up to six 2800 Watt, hot-swappable power supplies

  • Fan modules—Four hot-swappable fan modules

  • Up to 8 UCS X-Series compute nodes, including the Cisco UCS X210c M6 Compute Node (UCSX-210C-M6), a compute node that contains one or two CPUs and up to six drives. For information about the compute node, see the Cisco UCS X210c M6 Compute Node Installation and Service Note.

  • Up to 4 UCS X-Series compute nodes paired 1:1 with up to 4 Cisco UCS X-Series PCIe nodes, including the Cisco UCS X440p PCIe Node. This configuration requires two Cisco UCS X9416 X-Fabric Modules regardless of the number of PCIe nodes installed. For information about the PCIe node, see the Cisco UCS X440p PCIe Node Installation and Service Guide.

The following figures show the server chassis front and back.
Figure 1. Cisco UCS X9508 Server Chassis, Front

1

System LEDs:

  • Locator LED/Button

  • System Status LED

  • Network Link LED

For information about System LEDs, see LEDs.

2

Node Slots, a total of 8.

Shown populated with compute nodes, but the slots can also contain PCIe nodes.

3

Power Supplies, a maximum of 6.

4

System Asset Tag

5

System side panels (two), which are removable. The side panels cover the rack mounting brackets.

Figure 2. Cisco UCS X9508 Server Chassis, Rear

1

Power Entry Modules (PEMs) for facility inlet power

Each PEM contains 3 IEC 320 C20 inlets.

  • PEM 1 is at the top of the chassis, and it supports IEC inlets 1 through 3, with inlet 1 at the top of PEM 1.

  • PEM 2 is at the bottom of the chassis, and it supports IEC inlets 4 through 6, with inlet 4 at the top of PEM 2.

2

Intelligent Fabric Modules (shown populated), which are always deployed as a matched pair of one of the following:

  • Cisco UCS 9108 100G modules

  • Cisco UCS 9108 25G modules

3

System fans (four)

4

X-Fabric Module slots for either UCS active filler panels (for compute nodes) or up to two UCS X-Fabric Modules (for compute nodes paired with PCIe nodes).

Features and Benefits

The Cisco UCS X9508 server chassis revolutionizes the use and deployment of compute-node-based and PCIe-node-based systems. By incorporating unified fabric, cloud-native management, and X-Fabric technology, the Cisco Unified Computing System enables the chassis to have fewer physical components, no independent management, and greater energy efficiency than traditional blade server chassis.

This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and enables the Cisco Unified Computing System to scale to 20 chassis without adding complexity. The Cisco UCS X9508 server chassis is a critical component in delivering the Cisco Unified Computing System benefits of data center simplicity and IT responsiveness.

Table 1. Features and Benefits

Feature

Benefit

Management by Cisco Intersight

Reduces total cost of ownership by removing management modules from the chassis, making the chassis stateless.

Provides a single, highly available cloud-based management tool for all server chassis, IFMs, XFMs, and nodes, thus reducing administrative tasks.

Unified fabric

Decreases TCO by reducing the number of network interface cards (NICs), host bus adapters (HBAs), switches, and cables needed.

Support for two UCS I/O Modules

Eliminates switches from the chassis, including the complex configuration and management of those switches, allowing a system to scale without adding complexity and cost.

Allows use of two I/O modules for redundancy or aggregation of bandwidth.

Auto discovery

Requires no configuration; like all components in the Cisco Unified Computing System, chassis are automatically recognized and configured by Cisco Intersight.

Direct node to fabric connectivity

Provides reconfigurable chassis to accommodate a variety of form factors and functions, which supports investment protection for new fabrics and future compute and PCIe nodes.

Provides IFM-to-compute-node connectivity within the chassis through an Ortho-Direct connection.

Provides each of the 8 compute nodes with 200 Gbps (dual 25G-PAM4-ETH x8 lanes) of available Ethernet fabric throughput. The system is designed to support higher potential Ethernet fabric throughput for future and emerging technologies, such as 112-Gbps PAM4 Ethernet.

Provides each of the 8 compute nodes with 200 Gbps (dual 16G-PCIe x16 lanes) of available PCIe fabric throughput. The system is designed to support higher potential PCIe fabric throughput for future and emerging technologies, such as 32-Gbps PCIe Gen5.

Redundant hot swappable power supplies and fans

Provides high availability in multiple configurations.

Increases serviceability.

Provides uninterrupted service during maintenance.

Available configured for AC environments (mixing power input types is not supported).

Hot-pluggable compute nodes and intelligent fabric modules

Provides uninterrupted service during maintenance and server deployment.

Comprehensive monitoring

Provides extensive environmental monitoring on each chassis.

Allows use of user thresholds to optimize environmental management of the chassis.

Efficient front-to-back airflow

Helps reduce power consumption and increase component reliability.

Tool-free installation

Requires no specialized tools for chassis installation.

Provides mounting rails for easy installation and servicing.

Node configurations

Allows up to 8 UCS compute nodes, or up to 4 compute nodes paired with 4 UCS PCIe nodes.
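As a quick check of the throughput rows above, the per-node Ethernet figure follows directly from the lane arithmetic. This is a minimal sketch; the even split of the eight 25-Gbps lanes across the two IFMs is our reading of the "dual 25G-PAM4-ETH x8 lanes" notation:

    # Per-node Ethernet fabric throughput for the UCS X9508 (sketch).
    lanes_per_node = 8       # dual 25G-PAM4-ETH x8 lanes
    gbps_per_lane = 25
    per_node_gbps = lanes_per_node * gbps_per_lane   # 200 Gbps per compute node
    per_ifm_gbps = per_node_gbps // 2                # 100 Gbps per IFM, assuming an even split
    print(per_node_gbps, per_ifm_gbps)               # 200 100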

Chassis Components

This section provides an overview of the chassis components.

Cisco UCS X9508 Server Chassis

The Cisco UCS X9508 Series server chassis is a scalable and flexible chassis for today’s and tomorrow’s data center that helps reduce total cost of ownership.

The chassis is seven rack units (7 RU) high and can mount in an industry-standard 19-inch rack with square holes for use with cage nuts or round holes for use with spring nuts. The chassis can house up to eight Cisco UCS nodes.

Up to six hot-swappable AC power supplies are accessible from the front of the chassis. These power supplies can be configured to support nonredundant, N+1 redundant, N+2 redundant, and grid-redundant configurations. The rear of the chassis contains four hot-swappable fans, six power connectors (one per power supply), two horizontal top slots for Intelligent Fabric Modules (IFM1, IFM2), and two additional horizontal bottom slots for X-Fabric modules (XFM1, XFM2).

Scalability is dependent on both hardware and software. For more information, see the appropriate UCS software release notes.

Compute Nodes

The Cisco UCS X Series compute nodes are based on industry-standard server technologies and provide the following:

  • Up to two Intel multi-core processors

  • Front-accessible, hot-swappable NVMe drives or solid-state disk (SSD) drives

  • Depending on the compute node, support is available for up to two adapter card connections for up to 200 Gbps of redundant I/O throughput

  • Industry-standard double-data-rate 4 (DDR4) memory

  • Remote management through an integrated service processor that also executes policy established in Cisco Intersight cloud-based server management.

  • Local keyboard, video, and mouse (KVM) and serial console access through a front console port on each compute node

Cisco UCS X210c M6 Compute Node

The Cisco UCS X210c M6 is a two-socket compute node that hosts a maximum of two M6 CPUs. This compute node is supported in the Cisco UCS X9508 server chassis, which provides power and cooling. Data interconnect for the compute node to other data center equipment is supported through Intelligent Fabric Modules in the same server chassis.

Each Cisco UCS X210c M6 compute node has Cisco-standard indicators on the face of the module. Indicators are grouped into module-level indicators and drive-level indicators.

Figure 3. Cisco UCS X210c M6 Compute Node

Intelligent Fabric Modules

The Cisco UCS X9508 contains Intelligent Fabric Modules (IFMs) on the rear of the server chassis. IFMs have multiple functions in the server chassis:

  • Data traffic: IFMs support network-level communication for traditional LAN and SAN traffic as well as aggregating and disaggregating traffic to and from individual compute nodes.

  • Chassis health: IFMs monitor common equipment in the server chassis, such as fan units, power supplies, environmental data, and the LED status panel. Management functions for the common equipment are supported through IFMs.

  • Compute Node health: IFMs monitor Keyboard-Video-Mouse (KVM) data, Serial over LAN (SoL) data, and IPMI data for the compute nodes in the chassis, as well as provide management of these features.

IFMs must always be deployed in pairs to provide redundancy and failover to safeguard system operation.

Cisco UCS 9108 25G Intelligent Fabric Module

The Cisco UCS 9108 25G Intelligent Fabric Module (UCSX-I-9108-25G) is an IFM that supports an aggregate data throughput of up to 200 Gbps (eight 25-Gbps ports) through two groups of four optical ports.

Figure 4. UCS 9108 25 Gbps Intelligent Fabric Module, Faceplate View

1

Status LEDs:

  • IFM Status (top LED)

  • Fan Status LEDs 1 through 3, with Fan 1 as LED 2, Fan 2 as LED 3, and Fan 3 as LED 4.

2

IFM Reset Button

3

SFP28 Optical Ports

Ports are arranged in two groups of four physical ports:

  • In the first group, port number 1 is the leftmost port and port number 4 is the rightmost port.

  • In the second group, port number 5 is the leftmost port and port number 8 is the rightmost port.

4

IFM Ejector Handles, left and right


Note


For information about removing and installing the IFM's components, see Cisco UCS 9108 25G IFM Field Replaceable Unit Replacement Instructions.


Cisco UCS 9108 100G Intelligent Fabric Module

The Cisco UCS 9108 100G Intelligent Fabric Module (UCSX-I-9108-100G) is an IFM that supports an aggregate data throughput of up to 800 Gbps (eight 100-Gbps ports) through two groups of four optical ports.

Figure 5. UCS 9108 100 Gbps Intelligent Fabric Module, Faceplate View

1

Status LEDs:

  • IFM Status (top LED)

  • Fan Status LEDs 1 through 3, with Fan 1 as LED 2, Fan 2 as LED 3, and Fan 3 as LED 4.

2

IFM Reset Button

3

QSFP28 Optical Ports.

Ports are arranged in two groups of four physical ports. Ports are stacked in vertical pairs, with two ports in each vertical stack.

  • Port number 1 is the top port of the left pair in the first group, and port number 3 is the top port of the right pair in that group.

  • Port number 5 is the top port of the left pair in the second group, and port number 7 is the top port of the right pair in that group.

4

IFM Ejector Handles, left and right


Note


For information about removing and installing the IFM's components, see Cisco UCS 9108 100G IFM Field Replaceable Unit Replacement Instructions.


X-Fabric Modules

The Cisco UCS X9508 server chassis supports Cisco X-Fabric Modules, including the Cisco UCS X9416 X-Fabric Module (XFM).

The modules are a configuration option:

  • X-Fabric Modules are required when the server chassis contains Cisco UCS X440p PCIe nodes.

  • X-Fabric Modules are not required if the server chassis contains only Cisco UCS X-Series compute nodes, such as the Cisco UCS X210c.


Caution


Although Cisco UCS X-Fabric Modules can be removed, it is a best practice to leave them installed, even during chassis installation. If your Cisco UCS X9508 chassis is configured with XFM blanks instead of XFMs, leave the blanks installed as well, even during chassis installation.


X-Fabric Modules are always deployed in pairs to support GPU acceleration through the Cisco UCS X440p PCIe nodes. Therefore, two XFMs must be installed in a server chassis that contains any number of PCIe nodes.


Caution


Do not operate the server chassis with the XFM slots empty!


Each server chassis supports two UCS X9416 modules, which are located in the two horizontal module slots at the bottom of the chassis rear.

1

XFM slot 1 (XFM1)

Provides PCIe connectivity to all module slots 1 through 8

2

XFM slot 2 (XFM2)

Provides PCIe connectivity to all module slots 1 through 8

Cisco UCS X9416 X-Fabric Module

The Cisco UCS X9416 module is a Cisco X-Fabric Module (XFM) that provides PCIe connectivity for module slots one through eight on the front of the server chassis. Each X-Fabric Module is installed in the bottom two slots of the rear of the Cisco UCS X9508 server chassis.


Caution


Although the Cisco UCS X9416 Fabric Modules can be removed, it is a best practice to leave them installed even during chassis installation.


Each module provides:

  • Integrated, hot-swappable active fans for optimal cooling

  • PCIe x16 connectivity and signaling between pairs of compute nodes and GPU modules, such as the Cisco UCS X440p PCIe node

Each module has status LEDs to visually indicate the operational status of the X-Fabric Module and its fans.

1

Status LEDs:

  • Module Status (top LED)

  • Fan Status LEDs 1 through 3, with Fan 1 as LED 2, Fan 2 as LED 3, and Fan 3 as LED 4.

2

Module Ejector Handles, Left and Right


Note


For information about removing and installing the XFM's components, see Cisco UCS X9416 X-Fabric Module Field Replaceable Unit Replacement Instructions.


Cisco UCS X-Fabric Module Blanks

The Cisco UCSX-9508-RBLK is a Cisco UCS X-Fabric Module blank that fills an XFM slot reserved for future X-Fabric connectivity. The module blank contains active fans to facilitate airflow, so it is often called the Active Fan Module (AFM).

In a typical configuration, this module blank can be installed in either of the two bottom slots in the rear of the chassis, below the IFM slots.


Caution


If your Cisco UCS X9508 server is configured so that no XFMs are installed, only XFM blanks, leave the blanks installed even during chassis installation.


Figure 6. UCS X9508 Rear Module Blank (AFM), Faceplate View

1

Status LEDs:

  • Module Status (top LED)

  • Fan Status LEDs 1 through 3, with Fan 1 as LED 2, Fan 2 as LED 3, and Fan 3 as LED 4.

2

Module Ejector Handles, Left and Right


Note


For information about removing and installing the XFM's components, see Cisco UCS 9508 Active Fan Module (AFM) Field Replaceable Unit Replacement Instructions.


Fan Modules

The chassis contains four fan modules; all four must be installed for optimal cooling. Fans draw air in through the front of the chassis (the cold aisle) and exhaust air through the back of the chassis (the hot aisle).

Fans are located in the middle of the server chassis rear panel. Fans are numbered one to four starting with the leftmost fan.

Figure 7. Fan Module

Power Supplies

The chassis supports up to six AC power supply units (PSUs); a minimum configuration of two PSUs is required. The PSUs are Titanium-certified, 2800-Watt AC power supplies.

PSUs are redundant and load-sharing and can be used in the following power modes:

  • N+1 power supply configuration, where N is the number of power supplies required to support system power requirements

  • N+2 power supply configuration, where N is the number of power supplies required to support system power requirements

  • Grid configuration, also known as N+N power supply configuration, in which N is the number of power supplies required to support the system power requirements


Note


The chassis requires a minimum of two PSUs to operate.


Figure 8. AC Power Supply

To determine the number of power supplies needed for a given configuration, use the Cisco UCS Power Calculator tool.

LEDs

One LED indicates power connection presence, power supply operation, and fault states. See Interpreting LEDs for details.

Buttons

There are no buttons on a power supply.

Connectors

The AC power connections are at the rear of the chassis on the Power Entry Module (PEM) to support AC input from the facility. The chassis has two PEMs (PEM 1 and PEM 2), and each supports 3 power supplies.

  • PEM 1 supports PSUs 1, 2, and 3.

  • PEM 2 supports PSUs 4, 5, and 6.

Each of the six hot-swappable power supplies is accessible from the front of the chassis. These power supplies are Titanium efficiency, and they can be configured to support non-redundant, N+1 redundant, N+2 redundant, and grid-redundant configurations.
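The PEM-to-PSU mapping above lends itself to a simple lookup when scripting cabling checks. A minimal sketch; the 1:1 pairing of PSU number to IEC inlet number within each PEM is our assumption from the matching numbering:

    # Map a PSU number (1-6) to its Power Entry Module and IEC inlet (sketch).
    def pem_and_inlet(psu: int) -> tuple[int, int]:
        if not 1 <= psu <= 6:
            raise ValueError("the chassis has six PSU bays")
        pem = 1 if psu <= 3 else 2   # PEM 1 serves PSUs 1-3, PEM 2 serves PSUs 4-6
        return pem, psu              # inlets 1-6 run top to bottom across the two PEMs

    print(pem_and_inlet(5))   # (2, 5)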

Power Supply Configuration

When planning the power supply configuration, consider the following:

  • AC power supplies are all single phase, and each has a single input for connectivity to its respective PEM. The customer power source (a rack PDU or equivalent) connects input power directly to the chassis Power Entry Modules (PEMs), not to the power supplies themselves.

  • The number of power supplies required to power a chassis varies depending on the following factors:

    • The total "Maximum Draw" required to power all the components configured within that chassis—such as intelligent fabric modules (IFMs), fans, compute nodes (CPU and memory configuration of the compute nodes).

    • The desired power redundancy configuration for the chassis. The chassis supports non-redundant, N+1, N+2, and grid (also known as N+N) power supply configurations.

  • When connecting the chassis to facility power, make sure not to overload the capacity of a PDU or power strip. For example, do not connect all PSUs to one PDU or power strip that cannot carry the total power draw of the chassis.
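These considerations reduce to simple arithmetic when sizing the power supplies. The sketch below is illustrative only; use the Cisco UCS Power Calculator for authoritative sizing. It assumes the 2800 W high-line PSU rating and the two-PSU chassis minimum stated in this chapter:

    import math

    def psus_required(max_draw_watts: float, mode: str, psu_watts: int = 2800) -> int:
        # N is the number of supplies needed to carry the chassis "Maximum
        # Draw"; each redundancy mode adds its spares on top of N.
        n = math.ceil(max_draw_watts / psu_watts)
        spares = {"non-redundant": 0, "n+1": 1, "n+2": 2, "grid": n}[mode]
        return max(n + spares, 2)   # the chassis requires at least two PSUs

    # Example: a chassis with a 9 kW maximum draw needs ceil(9000/2800) = 4
    # supplies to carry the load, so 5 PSUs for N+1 or 6 PSUs for N+2.
    # A result greater than 6 exceeds what the chassis can physically hold.
    print(psus_required(9000, "n+1"), psus_required(9000, "n+2"))   # 5 6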

Non-Redundant Mode

In non-redundant mode, the system may go down with the loss of any supply or power grid associated with any particular chassis. We do not recommend operating the system in non-redundant mode in a production environment.

To operate in non-redundant mode, each chassis should have at least two power supplies installed. Supplies that are not used by the system are placed into standby. Which supplies are placed into standby depends on the installation order (not on the slot number). The load is balanced across the active power supplies, excluding any supplies in standby.

The chassis requires a minimum of two power supplies. In low-line operation, each power supply provides 1400W, for a total of 2800W from the minimum two supplies. Do not attempt to run the chassis on less than the minimum number of power supplies.

Any unused power supplies can be placed into standby mode or, if you choose, left uninstalled.


Note


In a non-redundant system, power supplies can be in any slot. Installing fewer than the required number of power supplies results in undesired behavior, such as compute node shutdown. Installing more than the required number of power supplies may result in lower power supply efficiency. At a minimum, this mode requires two power supplies.


N+1 Power Supply Configuration

In an N+1 configuration, the chassis contains a total number of power supplies to satisfy system power requirements, plus one additional power supply for redundancy.


Note


In an N+1 configuration, a maximum power of 14 kW is delivered with five PSUs configured as Active while the remaining PSU is in Standby mode. The 14 kW maximum delivered power is only possible in the high input voltage range (200-240 VAC). In the low input voltage range (100-127 VAC nominal), the maximum delivered power is 7 kW.


An N+1 configuration exists when:

  • Of the six total PSUs participating in the N+1 configuration, five are turned on and configured to operate in Active mode.

  • All five active PSUs equally share the power load for the chassis.

  • The remaining PSU is turned on and configured to provide Standby power to the chassis so that it can take over operation if one power supply fails, as long as the number of operating power supplies does not drop below the required minimum.

If one Active power supply fails, the surviving supplies can provide power to the chassis until the Standby power supply is switched to Active status. In addition, Cisco Intersight turns on any turned-off power supplies to bring the system back to N+1 status. The system continues to operate, giving you a chance to replace the failed power supply.

N+2 Power Supply Configuration

In an N+2 configuration, the chassis contains a total number of power supplies to satisfy system power requirements, plus two additional power supplies for redundancy.


Note


In N+2 redundant mode, a maximum power load of 11.2 kW is supported with four active PSUs. The 11.2 kW maximum power load is only possible in the high input voltage range (200-240 VAC). In the low input voltage range (100-127 VAC nominal), the maximum delivered power is 5.6 kW.


An N+2 configuration occurs when:

  • Of the six total PSUs participating in the N+2 configuration, four are turned on and configured to operate in Active mode.

  • All four active PSUs equally share the power load for the chassis.

  • The remaining two PSUs are turned on and configured to provide Standby power to the chassis so that they can take over operation if up to two power supplies fail, as long as the number of operating power supplies does not drop below the required minimum.

If one or two power supplies should fail, the surviving supplies can provide power to the chassis. In addition, the Cisco Intersight interface supports turning on any "turned-off" power supplies to bring the system back to N+2 status.

Grid Configuration

With grid power configuration (also called N+N redundancy), each set of three PSUs has its own input power circuit, so each set of PSUs is isolated from any failures that might affect the other set of PSUs. If one input power source fails, causing a loss of power to three power supplies, the surviving power supplies on the other power circuit continue to provide power to the chassis.


Caution


Grid redundant mode requires the chassis load to be limited to 8.4 kW in the high input voltage range (200-240 VAC) and 4.2 kW in the low input voltage range for a maximum grid configuration (3+3). For a 2+2 minimum configuration, the chassis load is limited to 5.6 kW for high-line input voltage and 2.8 kW for low-line input voltage.


Grid redundant mode is configured when:

  • All six PSUs are in Active mode to provide power.

  • Two sets of three PSUs are each connected to separate facility input power sources, including separate cabling for each set.

  • The total number of PSUs is divided equally between the input power sources. A grid power configuration therefore supports 3+3 (maximum configuration per input power source) or 2+2 (minimum configuration per input power source).

The grid power configuration is sometimes used when you have two separate facility input power sources available to a chassis. A common reason for using this power supply configuration is if the rack power distribution is such that power is provided by two PDUs and you want redundant protection in the case of a PDU failure.
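The maximum deliverable power quoted for each redundancy mode in the preceding sections follows one pattern: the number of active supplies multiplied by the per-PSU output. A minimal sketch, assuming the per-PSU ratings stated in this chapter (2800 W at high line, 1400 W at low line):

    # Usable chassis power budget per redundancy mode (sketch).
    PSU_WATTS = {"high_line": 2800, "low_line": 1400}

    def deliverable_power(installed: int, mode: str, line: str = "high_line") -> int:
        # Redundant capacity held in reserve: one PSU for N+1, two for N+2,
        # and a full input circuit (half the PSUs) for grid/N+N.
        reserve = {"non-redundant": 0, "n+1": 1, "n+2": 2,
                   "grid": installed // 2}[mode]
        return (installed - reserve) * PSU_WATTS[line]

    # These match the figures quoted in this chapter:
    assert deliverable_power(6, "n+1") == 14000            # 5 active x 2800 W
    assert deliverable_power(6, "n+2") == 11200            # 4 active x 2800 W
    assert deliverable_power(6, "grid") == 8400            # 3+3 grid, high line
    assert deliverable_power(4, "grid") == 5600            # 2+2 grid, high line
    assert deliverable_power(6, "grid", "low_line") == 4200
    assert deliverable_power(2, "non-redundant", "low_line") == 2800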

LEDs

LEDs on both the chassis and the modules installed within the chassis identify operational states, both separately and in combination with other LEDs.

LED Locations

The UCS X9508 server chassis uses LEDs to indicate power, status, and location/identification. Other LEDs on IFMs, PSUs, fans, and compute nodes indicate status information for those elements of the system.

Figure 9. LEDs on a Cisco UCS X9508 Server Chassis—Front View
Figure 10. LEDs on the Cisco UCS X9508 Server Chassis—Rear View

Interpreting LEDs

Table 2. Chassis, System Fans, and Power Supply LEDs

LED

Color

Description

Locator

LED and button

(callout 1 on the chassis front panel)

Off

Locator not enabled.

Blue

Locates a selected chassis

You can initiate beaconing in Cisco Intersight or with the button, which toggles the LED on and off.

Network Status

(callout 1 on the chassis front panel)

Off

Network link state undefined.

Solid Green

Network link state established on at least one IFM, but no traffic detected.

Blinking Green

Network traffic detected on at least one IFM.

System Status

(callout 1 on the chassis front panel)

Solid amber

Chassis is in a degraded operational state. For example:

  • Power Supply Redundancy Lost

  • Mismatched Processors

  • 1 of N Processors Faulty

  • Memory RAS Failure

  • Failed Storage Drive/SSD

Solid Green

Normal operating condition.

Blinking Amber

Chassis is in a critical error state. For example:

  • Boot Failure

  • Fatal Processor and/or bus error detected

  • Loss of both I/O Modules

  • Over Temperature Condition

Off

System is in an undefined operational state or not receiving power.

Fan Module

(callout 3 on the Chassis Rear Panel)

Off

No power to the chassis or the fan module was removed from the chassis.

Amber

Fan module restarting.

Green

Normal operation.

Blinking amber

The fan module has failed.

Power Supplies, each with one bicolor LED

(callout 3 on the Chassis Front Panel)

Off

Power supply is not fully seated, so no connection exists.

Green

Normal operation.

Blinking green

AC power is present, but the power supply is in Standby mode.

Amber

Any fault condition is detected. Some examples:

  • Over or under voltage

  • Over temperature alarm

  • Power supply has no connection to a power cord.

Blinking Amber

Any warning condition is detected. Some examples:

  • Over voltage warning

  • Over temperature warning

Table 3. Intelligent Fabric Module and Rear Module Blank LEDs

LED

Color

Description

Module Status

(callouts 2 and 4 on the Chassis Rear Panel)

Off

No power.

Green

Normal operation.

Amber

Booting or minor temperature alarm.

Blinking amber

POST error or other error condition.

Module Fans

(callouts 2 and 4 on the Chassis Rear Panel)

Off

Link down.

Green

Link up and operationally enabled.

Amber

Link up and administratively disabled.

Blinking amber

POST error or other error condition.

Table 4. Compute Node Server LEDs

LED

Color

Description

Compute Node Power

(callout 2 on the Chassis Front Panel)

Off

Power off.

Green

Normal operation.

Amber

Standby.

Compute Node Activity

(callout 2 on the Chassis Front Panel)

Off

None of the network links are up.

Green

At least one network link is up.

Compute Node Health

(callout 2 on the Chassis Front Panel)

Off

Power off.

Green

Normal operation.

Amber

Degraded operation.

Blinking Amber

Critical error.

Compute Node Locator

LED and button

(callout 2 on the Chassis Front Panel)

Off

Locator not enabled.

Blinking Blue 1 Hz

Locates a selected compute node—If the LED is not blinking, the compute node is not selected.

You can initiate the LED in Cisco Intersight or by pressing the button, which toggles the LED on and off.

Drive Activity

Off

Inactive.

Green

Outstanding I/O to disk drive.

Drive Health

Off

No fault detected, the drive is not installed, or it is not receiving power.

Amber

Fault detected.

Flashing Amber 4 Hz

Drive rebuild active.

If the Drive Activity LED is also flashing amber, a drive rebuild is in progress.

Optional Hardware Configuration

As an option, the server chassis can support GPU-based PCIe nodes, such as the Cisco UCS X440p PCIe Node, each of which pairs with a Cisco UCS X-Series compute node to provide GPU acceleration.

Each PCIe node contains:

  • A GPU adapter card supporting zero, one, or two NVIDIA T4 GPUs (UCSX-GPU-T4-MEZZ).

    Each GPU connects directly to the GPU adapter card through an x8 PCIe Gen 4 connection.

  • A storage adapter and riser card supporting zero, one, or two U.2 NVMe drives. NVMe RAID is supported through an Intel VROC key.


Note


For the server chassis to support any number of Cisco UCS X440p PCIe Nodes, both Cisco UCS X9416 X-Fabric Modules must be installed to provide proper PCIe signaling and connectivity to the node slots on the front of the server chassis.