
Lenovo ThinkSystem SC750 V4 Neptune Server

Product Guide


Abstract

The ThinkSystem SC750 V4 Neptune server is the next-generation high-performance server based on the sixth generation Lenovo Neptune® direct water cooling platform.

With two Intel Xeon 6900-series processors, the ThinkSystem SC750 V4 server combines the latest high-performance Intel processors and Lenovo's market-leading water-cooling solution, which results in extreme performance in dense packaging.

This product guide provides essential pre-sales information to understand the SC750 V4 server, its key features and specifications, components and options, and configuration guidelines. This guide is intended for technical specialists, sales specialists, sales engineers, IT architects, and other IT professionals who want to learn more about the SC750 V4 and consider its use in IT solutions.

Introduction

The ThinkSystem SC750 V4 Neptune node is the next-generation high-performance server based on the sixth generation Lenovo Neptune® direct water cooling platform.

With two Intel Xeon 6900-series processors, the ThinkSystem SC750 V4 server combines the latest high-performance Intel processors and Lenovo's market-leading water-cooling solution, which results in extreme performance in dense packaging.

The direct water cooled solution is designed to operate by using warm water, up to 45°C (113°F). Chillers are not needed for most customers, meaning even greater savings and a lower total cost of ownership. Two nodes are installed in a tray and 8 trays are housed in the ThinkSystem N1380 enclosure, a 13U rack mount unit that fits in a standard 19-inch rack.

Figure 1. The Lenovo ThinkSystem SC750 V4 server tray with two distinct two-socket nodes

Did you know?

Lenovo Neptune and the ThinkSystem SC750 V4 are designed to operate at water inlet temperatures as low as the dew point allows, up to 45°C. This design eliminates the need for additional chilling and allows for efficient reuse of the generated heat energy in building heating or adsorption cold water generation. It uses treated de-ionized water as a coolant, a much safer and more efficient alternative to the commonly used PG25 Glycol liquid, reflecting our commitment to environmental responsibility.

Key features

The ThinkSystem SC750 V4 server tray uses a new design, where the trays are mounted vertically in the N1380 enclosure. This new design allows for two servers to be mounted on the tray using the larger Intel Xeon 6900-series processors, along with memory DIMMs for all 12 memory channels of the processors.

Revolutionary Lenovo Neptune Design

Engineered for large-scale cloud infrastructures and High Performance Computing (HPC), Lenovo ThinkSystem SC750 V4 Neptune excels in intensive simulations and complex modeling. It is designed to handle technical computing, grid deployments, and analytics workloads in various fields such as research, life sciences, energy, engineering, and financial simulation.

At its core, Lenovo Neptune applies 100% direct warm-water cooling, maximizing performance and energy efficiency without sacrificing accessibility or serviceability. The SC750 V4 integrates seamlessly into a standard 19" rack cabinet with the ThinkSystem N1380 Neptune enclosure, featuring a patented blind-mate stainless steel dripless quick connection.

This design ensures easy serviceability and extreme performance density, making the SC750 V4 the go-to choice for compute clusters of all sizes, from departmental and workgroup levels to the world’s most powerful supercomputers, from Exascale to Everyscale®.

Lenovo Neptune utilizes superior materials, including custom copper water loops and patented CPU cold plates, for full system water-cooling. Unlike systems that use low-quality FEP plastic, Neptune features durable stainless steel and reliable EPDM hoses. The N1380 enclosure features an integrated manifold that offers a patented blind-mate mechanism with aerospace-grade dripless connectors to the compute trays, ensuring safe and seamless operation.

Compared to an equivalent air-cooled system, the SC750 V4 provides:

  • Up to 10% performance increase through continuous turbo mode
  • Up to 40% data center energy use reduction from server and infrastructure
  • Up to 100% heat removal by water directly on the heat sources
  • Up to 100% reduction in server fan noise in the data center

Lenovo’s direct water-cooled solutions are factory-integrated and are re-tested at the rack-level to ensure that a rack can be directly deployed at the customer site. This careful and consistent quality testing has been developed as a result of over a decade of experience designing and deploying DWC solutions to the very highest standards.

Scalability and performance

The ThinkSystem SC750 V4 server tray and ThinkSystem N1380 enclosure offer the following features to boost performance, improve scalability, and reduce costs:

  • Each SC750 V4 node supports two high-performance Intel Xeon 6900-series processors, 24x TruDDR5 RDIMMs or MRDIMMs, up to two PCIe 5.0 slots for high-speed I/O, and up to two drive bays.
  • 16x SC750 V4 nodes are mounted in 8x trays which are installed vertically in the 13U ThinkSystem N1380 enclosure. With server nodes effectively occupying less than 1U of rack space, it is a dense, scalable, and high-performance offering.
  • Each node supports one or two Intel Xeon 6900-series processors. Each processor has:
    • Up to 128 cores and 256 threads
    • Core speeds of up to 2.7 GHz
    • TDP ratings of up to 500 W
  • Support for DDR5 memory DIMMs to maximize the performance of the memory subsystem. Each node supports the following:
    • Up to 24 DDR5 memory DIMMs, 12 DIMMs per processor
    • 12 memory channels per processor (1 DIMM per channel)
    • Support for RDIMMs (operating at 6400 MHz) and MRDIMMs (operating at 8800 MHz)
    • Using 24x 128GB RDIMMs, the server supports up to 3TB of system memory
  • Each node has two front bays that support either slots or E3.S drives, giving the following choices:
    • 2x PCIe 5.0 x16 slots
    • 1x PCIe slot plus 2x E3.S 1T drives (or 1x E3.S 2T drive)
    • 4x E3.S 1T drives (or 2x E3.S 2T drives)
  • In addition, each node supports 2x E3.S 1T drives mounted on cold plates on top of the two processors
  • The server is Compute Express Link (CXL) v2.0 Ready. With CXL 2.0 for next-generation workloads, you can reduce compute latency in the data center and lower TCO. CXL is a protocol that runs across the standard PCIe physical layer and can support both standard PCIe devices as well as CXL devices on the same link.
  • All supported drives are E3.S NVMe drives, to maximize I/O performance in terms of throughput, bandwidth, and latency.
  • The node includes one Gigabit and two 25 Gb Ethernet onboard ports for cost-effective networking. High-speed networking can be added through the included PCIe slots. The Gigabit port and one of the 25 Gb ports support NC-SI for management communication to the BMC in addition to host communication.
  • The node offers PCI Express 5.0 I/O expansion capabilities, doubling the theoretical maximum bandwidth of PCIe 4.0 (32 GT/s in each direction for PCIe 5.0, compared to 16 GT/s with PCIe 4.0). A PCIe 5.0 x16 slot provides 128 GB/s of bandwidth, enough to support a 400GbE network connection.
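The 128 GB/s figure quoted above can be sanity-checked with a short calculation. This is only a back-of-the-envelope sketch: the marketing figure counts both link directions, and the PCIe 5.0 128b/130b line-encoding overhead is included below.

```python
# Back-of-the-envelope check of the PCIe 5.0 x16 bandwidth figures.
GT_PER_LANE_GEN5 = 32.0      # GT/s per lane, per direction (PCIe 5.0)
ENCODING = 128 / 130         # 128b/130b line encoding efficiency

def pcie5_bandwidth_gbs(lanes: int) -> float:
    """Usable bandwidth in GB/s per direction for a PCIe 5.0 link."""
    gbits = GT_PER_LANE_GEN5 * ENCODING * lanes   # Gb/s per direction
    return gbits / 8                              # convert to GB/s

per_direction = pcie5_bandwidth_gbs(16)   # ~63 GB/s each way
bidirectional = 2 * per_direction         # ~126 GB/s, marketed as 128 GB/s

# A 400GbE port needs 400 Gb/s = 50 GB/s, which fits within one direction.
print(round(per_direction, 1), round(bidirectional, 1))
```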

Energy efficiency

The direct water cooled solution offers the following energy efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment:

  • Water cooling eliminates the power that is drawn by cooling fans in the enclosure and dramatically reduces the required air movement in the server room, which also saves power. In combination with an Energy Aware Runtime environment, savings of as much as 40% are possible in the data center due to the reduced need for air conditioning.
  • Water chillers may not be required with a direct water cooled solution. Chillers are a major expense for most geographies and can be reduced or even eliminated because the water temperature can now be 45°C instead of 18°C in an air-cooled environment.
  • With the new water-cooled power conversion stations and SMM3, essentially 100% system heat recovery is possible, depending on the water and ambient temperatures chosen. At a 45°C water temperature and a 30°C room temperature, recovery is typically around 95%, with the remainder lost as surface-radiated heat. The absorbed heat energy may be reused for heating buildings in the winter, or for generating cold through adsorption chillers, for further operating expense savings.
  • The processors and other microelectronics are run at lower temperatures because they are water cooled, which uses less power, and allows for higher performance through Turbo Mode.
  • The processors are run at uniform temperatures because they are cooled in parallel loops, which avoids thermal jitter and provides higher and more reliable performance at the same power.
  • Low-voltage 1.1V DDR5 memory offers energy savings compared to 1.2V DDR4 DIMMs, an approximately 20% decrease in power consumption.
  • 80 PLUS Titanium-compliant power conversion stations ensure energy efficiency.
  • Power monitoring and management capabilities through the System Management Module in the N1380 enclosure.
  • The System Management Module (SMM3) automatically calculates the power boundaries at the enclosure level; the protective capping feature ensures overall power consumption stays within limits, adjusting dynamically as server and power conversion station status changes.
  • The Lenovo power/energy meter, based on the TI INA228, measures DC power for the CPU with greater than 97% accuracy at a 100 Hz sampling frequency, reporting to the XCC; the data can be accessed both in-band and out-of-band using IPMI raw commands.
  • Optional Lenovo XClarity Energy Manager provides advanced data center power notification, analysis, and policy-based management to help achieve lower heat output and reduced cooling needs.
  • Optional Energy Aware Runtime provides sophisticated power monitoring and energy optimization on a job-level during the application runtime without impacting performance negatively.
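As an illustration of how the 100 Hz power telemetry can be used, the sketch below integrates a stream of power samples into consumed energy. The sample values and the rectangle-rule integration are illustrative assumptions, not Lenovo tooling.

```python
# Sketch: turning a stream of 100 Hz power samples (as a TI INA228-based
# meter provides) into energy consumed. Values below are illustrative.
SAMPLE_RATE_HZ = 100
DT = 1.0 / SAMPLE_RATE_HZ   # 10 ms between samples

def energy_joules(power_samples_w):
    """Approximate energy by rectangle-rule integration of power over time."""
    return sum(power_samples_w) * DT

# One second of a hypothetical CPU drawing a steady 400 W:
samples = [400.0] * SAMPLE_RATE_HZ
print(energy_joules(samples))   # 400 J, i.e. 400 W for 1 s
```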

Manageability and security

The following powerful systems management features simplify local and remote management of the SC750 V4 server:

  • The server includes an XClarity Controller 3 (XCC3) to monitor server availability. An optional upgrade to XCC3 Premier provides remote control (keyboard, video, mouse) functions, support for the mounting of remote media files, FIPS 140-3 security, enhanced NIST 800-193 support, boot capture, power capping, and other management and security features.
  • Lenovo XClarity Administrator offers comprehensive hardware management tools that help to increase uptime, reduce costs, and improve productivity through advanced server management capabilities.
  • Lenovo XClarity Provisioning Manager, based in UEFI and accessible from F1 during boot, provides system inventory information, graphical UEFI Setup, platform update function, RAID Setup wizard, operating system installation function, and diagnostic functions.
  • Support for Lenovo XClarity Energy Manager which captures real-time power and temperature data from the server and provides automated controls to lower energy costs.
  • Support for industry-standard management protocols: IPMI 2.0, SNMP 3.0, Redfish REST API, and serial console via IPMI.
  • The SC750 V4 is enabled with the Lenovo HPC & AI Software Stack, so you can support multiple users and scale within a single cluster environment.
  • The Lenovo HPC & AI Software Stack provides HPC customers with a fully tested and supported open-source software stack, enabling administrators and users to make the most effective and environmentally sustainable use of Lenovo supercomputing capabilities.
  • Our Confluent management system and the Lenovo Intelligent Computing Orchestration (LiCO) web portal provide an interface designed to shield users from the complexity of HPC cluster orchestration and AI workload management, making open-source HPC software consumable for every customer.
  • LiCO web portal provides workflows for both AI and HPC, and supports multiple AI frameworks, allowing you to leverage a single cluster for diverse workload requirements.
  • Integrated Trusted Platform Module (TPM) 2.0 support enables advanced cryptographic functionality, such as digital signatures and remote attestation.
  • Supports Secure Boot to ensure only a digitally signed operating system can be used.
  • Industry-standard Advanced Encryption Standard (AES) NI support for faster, stronger encryption.
  • With the System Management Module (SMM) installed in the enclosure, only one Ethernet connection is needed to provide remote systems management functions for all SC750 V4 servers and the enclosure.
  • The SMM3 management module has two Ethernet ports, which allows a single Ethernet connection to be daisy-chained across up to 99 connections (e.g., 3 enclosures and 96 servers), thereby significantly reducing the number of Ethernet switch ports needed to manage an entire rack of SC750 V4 servers and N1380 enclosures.
  • The N1380 enclosure and its power conversion stations include drip sensors that monitor the inlet and outlet manifold quick connect couplers; leaks are reported via the SMM.
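Since the XCC3 exposes a Redfish REST API, telemetry such as power draw can be consumed programmatically. The sketch below parses an illustrative JSON fragment shaped like the DMTF Redfish Power resource; the exact resources and properties exposed by a given XCC3 firmware level may differ, so treat the payload and property names as assumptions.

```python
# Sketch: reading power telemetry from a Redfish-style JSON response.
# The payload is a minimal illustrative fragment modeled on the DMTF
# Redfish Power schema, not a capture from real XCC3 firmware.
import json

sample_response = json.loads("""
{
  "@odata.type": "#Power.v1_5_0.Power",
  "PowerControl": [
    { "Name": "Server Power Control", "PowerConsumedWatts": 612 }
  ]
}
""")

def consumed_watts(power_resource: dict) -> float:
    """Pull PowerConsumedWatts out of the first PowerControl entry."""
    return power_resource["PowerControl"][0]["PowerConsumedWatts"]

print(consumed_watts(sample_response))   # → 612
```

In practice the JSON would come from an authenticated HTTP GET against the BMC's Redfish service root (`/redfish/v1/`); parsing is shown locally here so the example stays self-contained.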

Availability and serviceability

The SC750 V4 node and N1380 enclosure provide the following features to simplify serviceability and increase system uptime:

  • Designed to run 24 hours a day, 7 days a week
  • With full water cooling, system fans are not required. This results in dramatically reduced noise levels on the data center floor, a significant benefit to personnel having to work on site.
  • Depending on the configuration and node population, the N1380 enclosure supports N+1 or N+N power policies for its power conversion stations, which means greater system uptime.
  • Power conversion stations are hot-swappable to minimize downtime.
  • Toolless cover removal on the trays provides easy access to upgrades and serviceable parts, such as adapters and memory.
  • The server uses ECC memory and supports memory RAS features including Single Device Data Correction (SDDC, also known as Chipkill), Patrol/Demand Scrubbing, Bounded Fault, DRAM Address Command Parity with Replay, DRAM Uncorrected ECC Error Retry, On-die ECC, ECC Error Check and Scrub (ECS), and Post Package Repair.
  • Proactive Platform Alerts (including PFA and SMART alerts): Processors, voltage regulators, memory, internal storage (HDDs and SSDs, NVMe SSDs, M.2 storage), fans, power conversion stations, and server ambient and subcomponent temperatures. Alerts can be surfaced through the XClarity Controller to managers such as Lenovo XClarity Administrator and other standards-based management applications. These proactive alerts let you take appropriate actions in advance of possible failure, thereby increasing server uptime and application availability.
  • The XCC offers optional remote management capability and can enable remote keyboard, video, and mouse (KVM) control and remote media for the node.
  • Built-in diagnostics in UEFI, using Lenovo XClarity Provisioning Manager, speed up troubleshooting tasks to reduce service time.
  • Lenovo XClarity Provisioning Manager supports diagnostics and can save service data to a USB key drive or a remote CIFS share folder for troubleshooting, reducing service time.
  • Auto restart in the event of a momentary loss of AC power (based on power policy setting in the XClarity Controller service processor)
  • Virtual reseat is a supported feature of the System Management Module that, in essence, disconnects the node from AC power and reconnects it from a remote location.
  • There is a three-year customer replaceable unit and onsite limited warranty, with next business day 9x5 coverage. Optional warranty upgrades and extensions are available.

Components and connectors

The front of the tray with two distinct SC750 V4 nodes is shown in the following figure.

Figure 2. Front view of the tray with two ThinkSystem SC750 V4 nodes

The following figure shows key components internal to the server tray.

Figure 3. Inside view of the two SC750 V4 nodes in the water-cooled tray

The compute nodes are installed vertically in the ThinkSystem N1380 enclosure, as shown in the following figure.

Figure 4. Front view of the N1380 enclosure

The rear of the N1380 enclosure contains the 4x water-cooled power conversion stations (PCS), water connections, and the System Management Module.

Note: The short hoses that are attached to the water inlet and water outlet have been removed from the figure for clarity.

Figure 5. Rear view of the N1380 enclosure (water hoses removed for clarity)

System architecture

The following figure shows the architectural block diagram of the SC750 V4.

Figure 6. SC750 V4 system architectural block diagram

Standard specifications - SC750 V4 tray

The following table lists the standard specifications of the SC750 V4 server tray.

Table 1. Standard specifications - SC750 V4 tray
Components Specification
Machine type 7DDJ - 3-year warranty
Form factor Two independent 2-socket nodes mounted on a water-cooled server tray, installed vertically in an enclosure
Enclosure support

ThinkSystem N1380 enclosure

Processor Two Intel Xeon 6900-series processors (formerly codenamed "Granite Rapids AP") per node. Supports processors with up to 128 cores, core speeds of up to 3.9 GHz, and TDP ratings of up to 500W. Supports PCIe 5.0 for high-performance connectivity to network adapters and NVMe drives.
Chipset None. Integrated into the processor.
Memory 24 DIMM slots with two processors (12 DIMM slots per processor) per node. Each processor has 12 memory channels, with 1 DIMM per channel (DPC). Supports Lenovo TruDDR5 RDIMMs at 6400 MHz and MRDIMMs at 8800 MHz.
Memory maximum Up to 3TB per node with 24x 128GB RDIMMs
Memory protection ECC, SDDC (for x4-based memory DIMMs), ADDDC (for x4-based memory DIMMs), and memory mirroring.
Disk drive bays

Each node supports up to 6x EDSFF E3.S NVMe SSDs:

  • 2x E3.S 1T drives or 1x E3.S 2T drive mounted in a bay in front slot 1 (in lieu of a PCIe slot)
  • 2x E3.S 1T drives or 1x E3.S 2T drive mounted in a bay in front slot 2 (in lieu of a PCIe slot)
  • 1x E3.S 1T drive mounted on top of CPU 1
  • 1x E3.S 1T drive mounted on top of CPU 2
Front slots 1 and 2 can hold either E3.S drives or a PCIe low profile adapter
Maximum internal storage 92.16TB using 6x 15.36TB E3.S NVMe SSDs
Storage controllers

Onboard NVMe ports (RAID using Intel VROC)

Optical drive bays No internal bays; use an external USB drive.
Network interfaces Each node: 2x 25 Gb Ethernet SFP28 onboard connectors based on Broadcom 57414 controller (support 10/25Gb), 1x 1 Gb Ethernet RJ45 onboard connector based on Intel I210 controller. Onboard 1Gb port and 25Gb port 1 can optionally be shared with the XClarity Controller 3 (XCC3) management processor for Wake-on-LAN and NC-SI support.
PCIe slots Each node: 1x or 2x PCIe 5.0 x16 slots with low profile form factor (each slot is mutually exclusive with E3.S drives installed in that bay)
GPUs No support.
Ports External diagnostics port, VGA video port, USB-C DisplayPort video port, 2x USB 3 (5 Gb/s) ports, RJ-45 1GbE systems management port for XCC remote management, mini-USB serial port. Additional ports provided by the enclosure as described in the Enclosure specifications section.
Video Embedded graphics with 16 MB memory with 2D hardware accelerator, integrated into the XClarity Controller 3 management controller. Two video ports (VGA port and USB-C DisplayPort video port); both can be used simultaneously if desired. Maximum resolution is 1920x1200 32bpp at 60Hz.
Security features Power-on password, administrator's password, Trusted Platform Module (TPM), supporting TPM 2.0. In China only, optional Nationz TPM 2.0 plug-in module.
Systems management

Operator panel with status LEDs. Optional External Diagnostics Handset with LCD display. XClarity Controller 3 (XCC3) embedded management based on the ASPEED AST2600 baseboard management controller (BMC), XClarity Administrator centralized infrastructure delivery, XClarity Integrator plugins, and XClarity Energy Manager centralized server power management. Optional XCC3 Premier to enable remote control functions and other features. Lenovo power/energy meter based on the TI INA228 for 100 Hz power measurements with >97% accuracy.

System Management Module (SMM3) in the N1380 enclosure provides additional systems management functions, from power monitoring to liquid leak detection at the chassis, tray, and power conversion station level.

Operating systems supported

Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu are Supported & Certified. Rocky Linux and AlmaLinux are Tested. See the Operating system support section for details and specific versions.

Limited warranty Three-year customer-replaceable unit and onsite limited warranty with 9x5 next business day (NBD).
Service and support Optional service upgrades are available through Lenovo Services: 4-hour or 2-hour response time, 6-hour fix time, 1-year or 2-year warranty extension, software support for Lenovo hardware and some third-party applications.
Dimensions Width: 546 mm (21.5 inches), height: 53 mm (2.1 inches), depth: 760 mm (29.9 inches)
Weight 37.2 kg (82 lbs)

Standard specifications - N1380 enclosure

The ThinkSystem N1380 enclosure provides shared high-power and high-efficiency power conversion stations. The SC750 V4 servers connect to the midplane of the N1380 enclosure. This midplane connection is for power (via a 48V busbar) and control only; the midplane does not provide any I/O connectivity.

The following table lists the standard specifications of the enclosure.

Table 2. Standard specifications: ThinkSystem N1380 enclosure
Components Specification
Machine type 7DDH - 3-year warranty
Form factor 13U rack-mounted enclosure
Maximum number of SC750 V4 nodes supported Up to 16x nodes per enclosure in 8x SC750 V4 server trays (2 nodes per tray).
Node support ThinkSystem SC750 V4
Enclosures per rack Up to three N1380 enclosures per 42U or 48U rack
System Management Module (SMM)

The hot-swappable System Management Module (SMM3) is the management device for the enclosure. Provides integrated systems management functions and controls the power and cooling features of the enclosure. Provides remote browser and CLI-based user interfaces for remote access via the dedicated Gigabit Ethernet port. Remote access is to both the management functions of the enclosure as well as the XClarity Controller 3 (XCC3) in each node.

The SMM has two Ethernet ports, which enables a single incoming Ethernet connection to be daisy-chained across 3 enclosures and 48 nodes, thereby significantly reducing the number of Ethernet switch ports needed to manage an entire rack of SC750 V4 nodes and enclosures.

Ports Two RJ45 ports on the rear of the enclosure for 10/100/1000 Ethernet connectivity to the SMM3 for power and cooling management.
I/O architecture None integrated. Use top-of-rack networking and storage switches.
Power supplies Up to 4x water-cooled hot-swap power conversion stations (PCS), depending on the power policy and the power requirements of the installed server node trays. PCS units are installed at the rear of the enclosure. Single power domain supplies power to all nodes. Optional redundancy (N+1 or N+N) and oversubscription, depending on configuration and node population. 80 PLUS Titanium compliant. Built-in overload and surge protection.
Cooling Direct water cooling supplied by water hoses connected to the rear of the enclosure.
System LEDs SMM has four LEDs: system error, identification, status, and system power. Each power conversion station (PCS) has AC, DC, and error LEDs. Nodes have more LEDs.
Systems management Browser-based enclosure management through an Ethernet port on the SMM at the rear of the enclosure. Integrated Ethernet switch provides direct access to the XClarity Controller (XCC) embedded management of the installed nodes. Nodes provide more management features.
Temperature
  • Operating water temperature:
    • 2°C to 50°C (35.6°F to 122°F) (ASHRAE W45 compliant)
  • Operating air temperature:
    • 10°C - 35°C (50°F - 95°F) (ASHRAE A2 compliant)

See Operating Environment for more information.

Electrical power 3-phase 200V-480Vac
Power cords Custom power cables for direct data center attachment, 1 dedicated (32A) or 1 shared (63A) per power module
Limited warranty Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.
Dimensions Width: 540 mm (21.3 in.), height: 572 mm (22.5 in.), depth: 1302 mm (51.2 in.). See Physical and electrical specifications for details.
Weight
  • Empty enclosure (with midplane and cables): 94 kg (208 lbs)
  • Fully configured enclosure with 4x power conversion stations and 8x SC750 V4 server trays: 484.5 kg (1069 lbs)

Models

The SC750 V4 node and N1380 enclosure are configured by using the configure-to-order (CTO) process with the Lenovo Cluster Solutions configurator (x-config).

Tip: The SC750 V4 and N1380 are not configurable in DCSC.

The following table lists the base CTO models and base feature codes.

Table 3. Base CTO models
Machine Type/Model Feature code Description
7DDJCTOLWW BYK4 ThinkSystem SC750 V4 Neptune Tray (3-Year Warranty)
7DDHCTOLWW BYJZ ThinkSystem N1380 Neptune Enclosure (3-Year Warranty)

Processors

The SC750 V4 node supports two processors as follows:

  • Two Intel Xeon 6900-series processors (formerly codenamed "Granite Rapids AP")

Note: A configuration of one processor is not supported.


Processor options

All supported Intel Xeon 6900P-series processors have the following characteristics:

  • 12 DDR5 memory channels at 1 DIMM per channel
  • 6 UPI links between processors at 24 GT/s
  • 96 PCIe 5.0 I/O lanes

The following table lists the Intel Xeon 6900P-series processors that are supported by the SC750 V4.

Table 4. Intel Xeon 6 processor support
Part number Feature code SKU Description Quantity
Intel Xeon 6900-series with P-cores
CTO only C4TT 6952P Intel Xeon 6952P 96C 400W 2.1GHz Processor 2
CTO only C2WS 6960P Intel Xeon 6960P 72C 500W 2.7GHz Processor 2
CTO only C2WR 6972P Intel Xeon 6972P 96C 500W 2.4GHz Processor 2
CTO only BYK5 6979P Intel Xeon 6979P 120C 500W 2.1GHz Processor 2
CTO only C2WQ 6980P Intel Xeon 6980P 128C 500W 2.0GHz Processor 2

Processor features

The following table summarizes the key features of the Intel Xeon 6900-series processors with P-cores that are supported in the SC750 V4.

Table 5. Intel Xeon 6900-series processor features
CPU model Cores/threads* Core speed (Base / TB max) L3 cache Mem. chan Max memory speed UPI 2.0 links & speed PCIe lanes TDP Accelerators (QAT, DLB, DSA, IAA) SGX Enclave Size
Intel Xeon 6900-series with P-cores
6952P 96 / 192 2.1 / 3.9 GHz 480 MB 12 6400 MHz 6 / 24 GT/s 96 400W 4 4 4 4 512GB
6960P 72 / 144 2.7 / 3.9 GHz 432 MB 12 6400 MHz 6 / 24 GT/s 96 500W 4 4 4 4 512GB
6972P 96 / 192 2.4 / 3.9 GHz 480 MB 12 6400 MHz 6 / 24 GT/s 96 500W 4 4 4 4 512GB
6979P 120 / 240 2.1 / 3.9 GHz 504 MB 12 6400 MHz 6 / 24 GT/s 96 500W 4 4 4 4 512GB
6980P 128 / 256 2.0 / 3.9 GHz 504 MB 12 6400 MHz 6 / 24 GT/s 96 500W 4 4 4 4 512GB

* E-core processors do not offer Hyper-Threading

Processors supported by the SC750 V4 include embedded accelerators to add even more processing capability:

  • QuickAssist Technology (Intel QAT)

    Help reduce system resource consumption by providing accelerated cryptography, key protection, and data compression with Intel QuickAssist Technology (Intel QAT). By offloading encryption and decryption, this built-in accelerator helps free up processor cores and helps systems serve a larger number of clients.

  • Intel Dynamic Load Balancer (Intel DLB)

    Improve the system performance related to handling network data on multi-core Intel Xeon Scalable processors. Intel Dynamic Load Balancer (Intel DLB) enables the efficient distribution of network processing across multiple CPU cores/threads and dynamically distributes network data across multiple CPU cores for processing as the system load varies. Intel DLB also restores the order of networking data packets processed simultaneously on CPU cores.

  • Intel Data Streaming Accelerator (Intel DSA)

    Drive high performance for storage, networking, and data-intensive workloads by improving streaming data movement and transformation operations. Intel Data Streaming Accelerator (Intel DSA) is designed to offload the most common data movement tasks that cause overhead in data center-scale deployments. Intel DSA helps speed up data movement across the CPU, memory, and caches, as well as all attached memory, storage, and network devices.

  • Intel In-Memory Analytics Accelerator (Intel IAA)

    Run database and analytics workloads faster, with potentially greater power efficiency. Intel In-Memory Analytics Accelerator (Intel IAA) increases query throughput and decreases the memory footprint for in-memory database and big data analytics workloads. Intel IAA is ideal for in-memory databases, open source databases and data stores like RocksDB, Redis, Cassandra, and MySQL.

The processors also support a separate and encrypted memory space, known as the SGX Enclave, for use by Intel Software Guard Extensions (SGX). The size of the SGX Enclave supported varies by processor model. Intel SGX offers hardware-based memory encryption that isolates specific application code and data in memory. It allows user-level code to allocate private regions of memory (enclaves) which are designed to be protected from processes running at higher privilege levels.

UEFI operating modes

The SC750 V4 offers preset operating modes that affect energy consumption and performance. These modes are a collection of predefined low-level UEFI settings that simplify the task of tuning the server to suit your business and workload requirements.

The following table lists the feature codes that allow you to specify the mode you wish to preset in the factory for CTO orders.

Table 6. UEFI operating mode presets in DCSC
Feature code Description
C3JB General Computing - Power Efficiency
C3JA General Computing - Peak Frequency
C3J9 General Computing - Max Performance
C3J8 High Performance Computing (HPC) (default)

The preset modes for the SC750 V4 are as follows:

  • General Computing - Power Efficiency (feature C3JB): This workload profile optimizes the performance per watt efficiency with a bias towards performance. This workload profile is analogous to “Efficiency – Favor Performance” operating mode on ThinkSystem V3 servers. This profile contains settings for ENERGY STAR® and ERP Lot9 compliance.
  • General Computing - Peak Frequency (feature C3JA): This workload profile is defined by the requirement to drive the highest core frequencies out of a processor across a subset of the available cores, not with all cores active. This workload profile benefits workloads requiring high per-core and/or overall CPU package frequency. These workloads may have variable resource demands, are relatively insensitive to overall platform latency, and are generally CPU clock constrained. Tuning a system for the highest possible core frequency may mean allowing inactive cores to transition in and out of sleep states (C-states), which allows active cores to run at higher frequency for different durations of time. Allowing cores to go into low-power states allows for higher per-core frequency but can introduce “jitter” in the system’s clock frequency.
  • General Computing - Max Performance (feature C3J9): This workload profile maximizes the absolute performance of the system without regard for power savings. Power savings features are disabled. This operating mode should be used when an application can sustain work across all cores simultaneously and is Non-uniform Memory Access (NUMA) aware.
  • High Performance Computing (HPC) (feature C3J8): This profile is for customers running large-scale scientific and engineering workloads. These environments tend to be clustered environments where each node performs at maximum utilization for extended periods of time, and the application is Non-uniform Memory Access (NUMA) aware.

Memory

The SC750 V4 uses Lenovo TruDDR5 RDIMM memory operating at 6400 MHz and MRDIMM memory operating at 8800 MHz. The server supports up to 24 DIMMs with 2 processors. The processors have 12 memory channels and support 1 DIMM per channel (DPC). The server supports up to 3TB of memory using 24x 128GB RDIMMs and two processors.

Lenovo TruDDR5 memory uses the highest quality components that are sourced from Tier 1 DRAM suppliers and only memory that meets the strict requirements of Lenovo is selected. It is compatibility tested and tuned to maximize performance and reliability. From a service and support standpoint, Lenovo TruDDR5 memory automatically assumes the system warranty, and Lenovo provides service and support worldwide.

The following table lists the memory options that are currently supported by the SC750 V4.

Table 7. Memory options
Part number  Feature code  Description  Quantity supported
x4 RDIMMs - 6400 MHz
4X77A90966 C0TQ ThinkSystem 64GB TruDDR5 6400MHz (2Rx4) RDIMM 24
4X77A90997 BZ7D ThinkSystem 96GB TruDDR5 6400MHz (2Rx4) RDIMM 24
4X77A90993 C0U1 ThinkSystem 128GB TruDDR5 6400MHz (2Rx4) RDIMM 24
x8 RDIMMs - 6400 MHz
4X77A90965 BYTJ ThinkSystem 32GB TruDDR5 6400MHz (2Rx8) RDIMM 24
4X77A90996 BZ7C ThinkSystem 48GB TruDDR5 6400MHz (2Rx8) RDIMM 24
MRDIMMs - 8800 MHz
4X77A90998 C0TY ThinkSystem 32GB TruDDR5 8800MHz (2Rx8) MRDIMM 24
4X77A90999 C0TX ThinkSystem 64GB TruDDR5 8800MHz (2Rx4) MRDIMM 24

The following rules apply when selecting the memory configuration:

  • The SC750 V4 only supports quantities of 12 DIMMs per processor (24 DIMMs total), so that all memory channels are populated
  • Mixing of DIMMs is not supported; all DIMMs must be the same part number

The following memory protection technologies are supported:

  • ECC detection/correction
  • Bounded Fault detection/correction
  • SDDC (for x4-based memory DIMMs; look for "x4" in the DIMM description)
  • ADDDC (for x4-based memory DIMMs)
  • Memory mirroring

If memory channel mirroring is used, then DIMMs must be installed in pairs (minimum of one pair per processor), and both DIMMs in the pair must be identical in type and size. 50% of the installed capacity is available to the operating system. Memory rank sparing is not supported.
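
The memory rules above reduce to simple arithmetic. The following Python sketch (a hypothetical helper, not a Lenovo tool; the function name is illustrative) computes the installed and OS-visible capacity for a given DIMM size:

```python
DIMMS_PER_CPU = 12   # 12 memory channels x 1 DIMM per channel
CPUS_PER_NODE = 2

def node_memory_gb(dimm_gb, mirroring=False):
    """Total installed capacity; memory mirroring halves what the OS sees."""
    total = dimm_gb * DIMMS_PER_CPU * CPUS_PER_NODE
    return total // 2 if mirroring else total

print(node_memory_gb(128))        # 24x 128GB RDIMMs -> 3072 GB (3 TB)
print(node_memory_gb(128, True))  # mirrored -> 1536 GB usable
```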

GPU accelerators

GPUs are not offered in the SC750 V4.

For GPU support in a water-cooled solution, the ThinkSystem SC777 V4, SR780a V3, SD650-N V3 and SD665-N V3 all enable high-performance water-cooled NVIDIA GPUs.

Internal storage

The SC750 V4 node supports up to 6x E3.S SSDs. These are internal drives - they are not front accessible and are non-hot-swap.

The locations of the drives are as shown below:

  • 2x E3.S 1T drives or 1x E3.S 2T drive mounted in a bay in front slot 1
  • 2x E3.S 1T drives or 1x E3.S 2T drive mounted in a bay in front slot 2
  • 1x E3.S 1T drive mounted on top of CPU 1
  • 1x E3.S 1T drive mounted on top of CPU 2

The drive bays in slot 1 and slot 2 may instead be configured as PCIe slots as described in the I/O expansion options section.

The following figure shows the location of the drives.

SC750 V4 internal drive bays
Figure 7. SC750 V4 internal drive bays

Configuration notes:

  • The node supports only NVMe drives
  • The drives are connected to onboard controllers; RAID functionality is provided using Intel VROC. Details are in the Intel VROC onboard RAID section.
  • NVMe drives are connected to CPUs as follows:
    • Drives in front bay 1 connect to CPU 1
    • Drives in front bay 2 connect to CPU 2
    • Drives mounted on the CPUs are both connected to CPU 2

The feature codes to select the appropriate storage cage are listed in the following table.

Table 8. Drive mounting kits
Part number Feature code Description Max per node
E3.S 1T drive mounts on processors
4XF7A99478 BYTL ThinkSystem Neptune Front CPU E3.S 1T Mounting Kit 1
4XF7A99479 BYTM ThinkSystem Neptune Rear CPU E3.S 1T Mounting Kit 1
E3.S 1T drive mounts in front slots
4XF7A99476 BYLE ThinkSystem SC750 V4 Neptune Dual E3.S 1T Kit - Slot 1 1
4XF7A99477 BYLD ThinkSystem SC750 V4 Neptune Dual E3.S 1T Kit - Slot 2 1
CTO only BYLG ThinkSystem SC750 V4 Neptune Single E3.S 1T Kit - Slot 1 1
CTO only BYLF ThinkSystem SC750 V4 Neptune Single E3.S 1T - Slot 2 1

Controllers for internal storage

The drives of the SC750 V4 are connected to integrated NVMe storage controllers.

RAID functionality for the NVMe drives is provided by Intel VROC.

Intel VROC onboard RAID

Intel VROC (Virtual RAID on CPU) is a feature of the Intel processor that enables Integrated RAID support.

On the SC750 V4, Intel VROC provides RAID functions for the onboard NVMe controller (Intel VROC NVMe RAID).

VROC NVMe RAID offers RAID support for any NVMe drives directly connected to the ports on the server's system board or via adapters such as NVMe retimers or NVMe switch adapters. On the SC750 V4, RAID levels implemented are based on the VROC feature selected as indicated in the following table. RAID 1 is limited to 2 drives per array, and RAID 10 is limited to 4 drives per array. Hot-spare functionality is also supported.

Performance tip: For best performance with VROC NVMe RAID, the drives in an array should all be connected to the same processor. Spanning processors is possible; however, performance can be unpredictable and should be evaluated against your workload.

The SC750 V4 supports the VROC NVMe RAID offerings listed in the following table.

Table 9. Intel VROC NVMe RAID ordering information and feature support
Part number  Feature code  Description  Intel NVMe SSDs  Non-Intel NVMe SSDs  RAID 0  RAID 1  RAID 10  RAID 5
4L47A92670 BZ4W Intel VROC RAID1 Only Yes Yes No Yes No No
4L47A83669 BR9B Intel VROC (VMD NVMe RAID) Standard Yes Yes Yes Yes Yes No
4L47A39164 B96G Intel VROC (VMD NVMe RAID) Premium Yes Yes Yes Yes Yes Yes
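The table can be read as a validation rule: the selected VROC feature fixes the permitted RAID levels, and RAID 1 and RAID 10 carry per-array drive limits. The following Python sketch captures this (illustrative names only, not an Intel or Lenovo API):

```python
VROC_RAID_LEVELS = {
    "RAID1 Only": {1},
    "Standard":   {0, 1, 10},
    "Premium":    {0, 1, 5, 10},
}
PER_ARRAY_DRIVE_LIMIT = {1: 2, 10: 4}  # RAID 1 and RAID 10 per-array limits

def array_supported(feature, level, drives):
    """True if the VROC feature allows this RAID level and drive count."""
    if level not in VROC_RAID_LEVELS.get(feature, set()):
        return False
    limit = PER_ARRAY_DRIVE_LIMIT.get(level)
    return limit is None or drives <= limit
```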

Configuration notes:

  • If a feature code is ordered in a CTO build, the VROC functionality is enabled in the factory. For field upgrades, order a part number and it will be fulfilled as a Feature on Demand (FoD) license which can then be activated via the XCC management processor user interface.

Virtualization support: Virtualization support for Intel VROC is as follows:

  • VROC (VMD) NVMe RAID: VROC (VMD) NVMe RAID is supported by ESXi, KVM, Xen, and Hyper-V. On ESXi, support is limited to RAID 1; other RAID levels are not supported, although VROC is supported with both boot and data drives. Windows and Linux support VROC NVMe RAID for both host boot and guest OS functions, with RAID 0, 1, 5, and 10 supported.


Internal drive options

The following tables list the drive options for internal storage of the server.

SED support: The tables include a column to indicate which drives support SED encryption. The encryption functionality can be disabled if needed. Note: Not all SED-enabled drives have "SED" in the description.

Table 10. E3.S EDSFF trayless PCIe 5.0 NVMe SSDs
Part number  Feature code  Description  SED support  Max Qty
E3.S trayless SSDs - PCIe 5.0 NVMe - Mixed Use/Mainstream (3-5 DWPD)
4XB7A95514 C2VK ThinkSystem E3.S CD8P 1.6TB Mixed Use NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A95515 C2VL ThinkSystem E3.S CD8P 3.2TB Mixed Use NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A95516 C2VM ThinkSystem E3.S CD8P 6.4TB Mixed Use NVMe PCIe 5.0 x4 Trayless SSD Support 6
CTO only C2VN ThinkSystem E3.S CD8P 12.8TB Mixed Use NVMe PCIe 5.0 x4 Trayless SSD Support 4*
E3.S trayless SSDs - PCIe 5.0 NVMe - Read Intensive/Entry (<3 DWPD)
4XB7A95518 C2VP ThinkSystem E3.S CD8P 1.92TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A95519 C2VQ ThinkSystem E3.S CD8P 3.84TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6
CTO only C2VR ThinkSystem E3.S CD8P 7.68TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 4*
CTO only C2VS ThinkSystem E3.S CD8P 15.36TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 4*
4XB7A93367 C67G ThinkSystem E3.S PM9D3a 1.92TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A93368 C67H ThinkSystem E3.S PM9D3a 3.84TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A93369 C67J ThinkSystem E3.S PM9D3a 7.68TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6
4XB7A93370 C67K ThinkSystem E3.S PM9D3a 15.36TB Read Intensive NVMe PCIe 5.0 x4 Trayless SSD Support 6

* These drives are CTO only. These high-capacity CD8P drives are not normally supported in the front bays when the Dual E3.S 1T Kits are configured: ThinkSystem SC750 V4 Neptune Dual E3.S 1T Kit - Slot 1 (4XF7A99476) and ThinkSystem SC750 V4 Neptune Dual E3.S 1T Kit - Slot 2 (4XF7A99477).
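
The DWPD (drive writes per day) classes in the table translate into total write endurance. The following Python sketch gives a rough estimate (the 5-year service period is an assumption for illustration, not a warranty statement):

```python
def endurance_tbw(capacity_tb, dwpd, years=5):
    """Approximate total terabytes written over the service period."""
    return capacity_tb * dwpd * 365 * years

# A 3.2 TB mixed-use drive rated at 3 DWPD can absorb roughly:
print(round(endurance_tbw(3.2, 3)))  # 17520 TB written over 5 years
```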

Optical drives

The server supports the external USB optical drive listed in the following table.

Table 11. External optical drive
Part number Feature code Description
7XA7A05926 AVV8 ThinkSystem External USB DVD RW Optical Disk Drive

The drive is based on the Lenovo Slim DVD Burner DB65 drive and supports the following formats: DVD-RAM, DVD-RW, DVD+RW, DVD+R, DVD-R, DVD-ROM, DVD-R DL, CD-RW, CD-R, CD-ROM.

I/O expansion options

Each SC750 V4 node supports one or two PCIe slots, depending on the configuration:

  • Slot 1: PCIe 5.0 x16 slot connected to CPU 1
  • Slot 2: PCIe 5.0 x16 slot connected to CPU 2

The location of the slots is shown in the following figure.

SC750 V4 slot choices
Figure 8. SC750 V4 PCIe slots

Each slot is implemented using a 1-slot riser. Ordering information is in the following table.

Table 12. Riser ordering information
Part number Feature code Description
4TA7A99474 BYKY ThinkSystem SC750 V4 Neptune HHHL PCIe Riser Kit - Slot 1
4TA7A99475 BYK2 ThinkSystem SC750 V4 Neptune HHHL PCIe Riser Kit - Slot 2

One or both of these slots can alternatively be configured as internal E3.S NVMe drive bays, as described in the Internal storage section.

The following figure shows the four different networking slot configurations, including Shared I/O and Socket Direct.

SC750 V4 slot configurations
Figure 9. SC750 V4 slot configurations

Onboard ports

The SC750 V4 has three onboard network ports:

  • 2x 25GbE ports, connected to an onboard Broadcom 57414 controller, implemented with SFP28 cages for optical or copper connections. Supports 1Gb, 10Gb and 25Gb connections.
  • 1x 1GbE port, connected to an onboard Intel I210 controller, implemented with an RJ45 port for copper cabling

The locations of these ports are shown in the Components and connectors section. The 1GbE port and 25GbE Port 1 both support NC-SI for remote management. For factory orders, use the feature codes listed in the Remote Management section to specify which ports should have NC-SI enabled. If neither is chosen, NC-SI is disabled on both ports by default.

For the specifications of the 25GbE ports including the supported transceivers and cables, see the Broadcom 57414 product guide:
https://lenovopress.lenovo.com/lp0781-broadcom-57414-25gb-ethernet-adapters

Shared I/O

The SC750 V4 supports a feature called Shared I/O (SharedIO), also known as NVIDIA Networking Multi-Host technology, where a single network connection is shared between the two nodes in a tray. With Shared I/O, the NVIDIA adapter (feature BKSL or BKSP) is installed in one node in a tray, and a cable (feature BZ7E) is used to connect the adapter to a PCIe connector on the adjacent node in the same tray. The result is that the two nodes share the network connection of the adapter, with significant savings both in the cost of adapters and in the cost of switch ports.

Configuration rules:

  • Shared I/O requires the use of cable ThinkSystem SC750 V4 ConnectX-7 Auxiliary Cable, option 4X97B01936, feature BZ7E
  • The adapter must be installed in Slot 1 of node B. The cable connects the adapter to the onboard connector for Slot 1 of node A

Tip: The ConnectX-7 NDR200/HDR adapter (feature BKSL) can be used either standalone (maximum 2 per node or 4 per tray) or as part of a Shared I/O configuration (maximum 1 per tray).

Socket Direct

The SC750 V4 also supports a feature called Socket Direct, where an adapter installed in a node is connected to both processors in that node at the same time. Socket Direct enables direct PCIe access to both processors, eliminating the need for network traffic to traverse the inter-processor bus. This optimizes overall system performance and maximizes throughput for the most demanding applications and markets.

In the SC750 V4, Socket Direct is achieved by installing the NVIDIA adapter in slot 2 (connects to CPU 2) and a cable (feature BZ7E) is used to connect the adapter to the PCIe connector for slot 1 (which connects to CPU 1).

Configuration rules:

  • Socket Direct requires the use of cable ThinkSystem SC750 V4 ConnectX-7 Auxiliary Cable, option 4X97B01936, feature BZ7E
  • The adapter must be installed in Slot 2 of each node. The cable connects the adapter to the onboard connector for Slot 1
  • Slot 1 must remain empty if Socket Direct is configured; no drive or adapter can be installed
  • By default, the ConnectX-7 adapter connects at PCIe Gen5 and the Auxiliary Cable connects at PCIe Gen4. This can be changed in UEFI.

Network adapters

The SC750 V4 supports one or two network adapters installed in the PCIe slots. The following table lists the supported adapters. The node supports the SharedIO or Socket Direct features as indicated in the table.

Table 13. PCIe network adapters and internal cables
Part number  Feature code  Description  Standalone adapter*  SharedIO*  Socket Direct*
200Gb Ethernet / NDR200 InfiniBand Adapters
4XC7A99370 BKSL ThinkSystem SC750 V4 NVIDIA ConnectX-7 NDR200/HDR/200GbE QSFP112 2-Port PCIe Gen5 x16 Adapter (SharedIO) DWC Yes (2 max) Yes (1 per tray) No
400 Gb NDR InfiniBand Adapters
4XC7A99368 BKSN ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter DWC Yes (2 max) No Yes (1 max)
4XC7A99369 BKSP ThinkSystem NVIDIA ConnectX-7 NDR OSFP400 1-Port PCIe Gen5 x16 InfiniBand Adapter (SharedIO) DWC No Yes (1 per tray) No
Internal cables
4X97B01936 BZ7E ThinkSystem SC750 V4 ConnectX-7 Auxiliary Cable No Yes (1 per tray) No

* Numbers in parentheses are the maximum supported per node for that configuration, except for the SharedIO quantities, which are maximums per tray (2 nodes)

For more information, including the transceivers and cables that each adapter supports, see the list of Lenovo Press Product Guides in the Networking adapters category:
https://lenovopress.com/servers/options/ethernet

Storage host bus adapters

The SC750 V4 does not support storage host bus adapters.

Cooling

One of the most notable features of the ThinkSystem SC750 V4 offering is direct water cooling. Direct water cooling (DWC) is achieved by circulating the cooling water directly through cold plates that contact the CPU thermal case, DIMMs, drives, adapters, and other high-heat-producing components in the node.

One of the main advantages of direct water cooling is that the water can be relatively warm and still be effective, because water conducts heat much more effectively than air. Depending on environmental factors such as water and air temperature, effectively 100% of the heat can be removed by water cooling; in configurations slightly below that level, the remainder can easily be managed by a standard computer-room air conditioner. Measured data at a customer data center using the SD650-N V2 with insulated racks shows 98% heat capture at a 45°C water inlet temperature, and 99% heat capture at a 40°C water inlet temperature and 26.6°C ambient temperature.

Allowable inlet temperatures for the water can be as high as 45°C (113°F) with the SC750 V4. In most climates, water-side economizers can supply water at temperatures below 45°C for most of the year. This ability allows the data center chilled water system to be bypassed thus saving energy because the chiller is the most significant energy consumer in the data center. Typical economizer systems, such as dry-coolers, use only a fraction of the energy that is required by chillers, which produce 6-10°C (43-50°F) water. The facility energy savings are the largest component of the total energy savings that are realized when the SC750 V4 is deployed.

The advantages of the use of water cooling over air cooling result from water’s higher specific heat capacity, density, and thermal conductivity. These features allow water to transmit heat over greater distances with much less volumetric flow and reduced temperature difference as compared to air.

For cooling IT equipment, this heat transfer capability is its primary advantage. Water has a tremendously increased ability to transport heat away from its source to a secondary cooling surface, which allows for large, more optimally designed radiators or heat exchangers rather than small, inefficient fins that are mounted on or near a heat source, such as a CPU.
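The heat a water loop transports follows the relation Q = ṁ · c_p · ΔT. The following Python sketch illustrates the scale involved (nominal water property values are assumed):

```python
RHO_WATER = 997.0   # kg/m^3, water near room temperature
CP_WATER = 4181.0   # J/(kg*K), specific heat capacity of water

def loop_heat_kw(flow_lpm, delta_t_c):
    """Heat transported by the loop in kW: Q = m_dot * cp * dT."""
    m_dot = flow_lpm / 60.0 / 1000.0 * RHO_WATER  # L/min -> kg/s
    return m_dot * CP_WATER * delta_t_c / 1000.0

# 60 L/min with a 10 K inlet-to-outlet rise carries about 42 kW of heat
```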

The SC750 V4 offering uses the benefits of water by distributing it directly to the highest heat-generating node subsystem components. By doing so, the offering realizes 7% - 10% direct energy savings when compared to an air-cooled equivalent. That energy savings results from the removal of the system fans and the lower operating temperature of the direct water-cooled system components.

The direct energy savings at the enclosure level, combined with the potential for significant facility energy savings, makes the SC750 V4 an excellent choice for customers that are burdened by high energy costs or with a sustainability mandate.

Water is delivered to the enclosure from a coolant distribution unit (CDU) via water hoses, inlet and return. As shown in the following figure, each enclosure connects to the CDU via rack supply hoses. Typically each enclosure connects to the rack supply directly; however, two adjacent enclosures can be connected in series (daisy chain) to use the same supply, provided the inlet water of the first enclosure is 30°C.

The midplane in the enclosure routes the water via QuickConnects to each of the eight nodes, and each of the four power conversion stations.

N1380 enclosure and manifold assembly
Figure 10. N1380 enclosure water connections

The water flows through the SC750 V4 tray to cool all major heat-producing components. The inlet water is split into two parallel paths, one for each node in the tray. Each path is then split further to cool the processors, memory, drives, and adapters.

During the manufacturing and test cycle, Lenovo's water-cooled nodes are pressure tested with helium according to ASTM E499/E499M-11 (Standard Practice for Leaks Using the Mass Spectrometer Leak Detector in the Detector Probe Mode), and later again with nitrogen. Because helium and nitrogen molecules are smaller, this detects micro-leaks that may be undetectable by pressure testing with water or a water/glycol mixture.

This approach also allows Lenovo to ship the systems pressurized without needing to send hazardous antifreeze components to our customers.

Onsite, the materials used within the water loop from the CDU to the nodes should be limited to copper alloys with brazed joints, stainless steels with TIG- and MIG-welded joints, and EPDM rubber. In some instances, PVC might be an acceptable choice within the facility.

The water the system is filled with must be reasonably clean, bacteria-free water (< 100 CFU/ml), such as de-mineralized water, reverse-osmosis water, de-ionized water, or distilled water. It must be filtered with an in-line 50-micron filter. Biocide and corrosion inhibitors ensure clean operation without microbiological growth or corrosion. For details about the implementation of Lenovo Neptune water cooling, see the document Lenovo Neptune Direct Water-Cooling Standards.

Lenovo Data Center Power and Cooling Services can support you in the design, implementation and maintenance of the facility water-cooling infrastructure.

Water connections

Water connections to the N1380 enclosure are provided using hoses that connect directly to the coolant distribution unit (CDU), either an in-rack CDU or in-row CDU. As shown in the following figure, hose lengths required depend on the placement of the N1380 in the rack cabinet.

Hose lengths based on the position of the N1380 enclosure in the rack cabinet
Figure 11. Hose lengths based on the position of the N1380 enclosure in the rack cabinet

Water connections are made up of the following:

  • Short hoses permanently attached to the rear of the N1380 enclosure
  • Intermediate hoses, orderable as three different pairs of hoses, based on the location of the N1380 enclosure in the rack. Ordering information for these hoses is listed in the table below.
  • Connection to the CDU through additional hoses or valves on the data center water loop. Lenovo Datacenter Services can provide the end-to-end enablement for your water infrastructure needs.

The Intermediate hoses have a Sanitary Flange with a 1.5-inch Tri-Clamp at the CDU/data center end, as shown in the figure below.

Connections of the Intermediate hoses
Figure 12. Connections of the Intermediate hoses

Ordering information for the Intermediate hoses to supply and return the water are listed in the following table. Hoses can either be stainless steel or EPDM, depending on the data center requirements.

Table 14. Water connection ordering information
Part number Feature code Description Purpose
Stainless steel hoses*
CTO only C2R0 ThinkSystem N1380 Neptune Stainless Steel Hose Connection Internal water manifold for stainless steel hoses
EPDM hoses
CTO only BYKK ThinkSystem N1380 Neptune EPDM Hose Connection Internal water manifold for EPDM hoses
4XH7A99483 C2KQ ThinkSystem N1380 Neptune 2.2M and 2.8M EPDM Hose Set 2.24m and 2.82m EPDM intermediate hoses for an N1380 installed starting in rack position U1-U9
4XH7A99484 C2KP ThinkSystem N1380 Neptune 2.8M and 3.4M EPDM Hose Set 2.82m and 3.4m EPDM intermediate hoses for an N1380 installed starting in rack position U10-U22
4XH7A99485 C2KN ThinkSystem N1380 Neptune 3.4M and 3.8M EPDM Hose Set 3.4m and 3.8m EPDM intermediate hoses for an N1380 installed starting in rack position U23-U36

* Contact your local Lenovo sales representative or business partner for more information on configuring the stainless steel hoses
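
Selecting the correct EPDM hose set from the table above depends only on the enclosure's starting rack position. The following Python sketch expresses the mapping (an illustrative helper, not a Lenovo configurator tool):

```python
EPDM_HOSE_SETS = [
    (1, 9,   "4XH7A99483"),  # 2.2M and 2.8M hose set
    (10, 22, "4XH7A99484"),  # 2.8M and 3.4M hose set
    (23, 36, "4XH7A99485"),  # 3.4M and 3.8M hose set
]

def hose_set_for(start_u):
    """Return the hose set part number for an N1380 starting at rack unit start_u."""
    for low, high, part in EPDM_HOSE_SETS:
        if low <= start_u <= high:
            return part
    raise ValueError(f"no EPDM hose set defined for position U{start_u}")
```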

Power conversion stations

The N1380 enclosure supports up to four water-cooled Power Conversion Stations. The use of water-cooled power conversion stations enables an even greater amount of heat to be removed from the data center using water instead of air conditioning.

The Power Conversion Stations supply internal system power to a 48V busbar. This innovative design merges power conversion, rectification, and distribution into a single unit, a departure from traditional setups that require separate units, including separate power supplies, resulting in best-in-class efficiency.

N1380 power modules
Figure 13. N1380 power conversion stations

The N1380 supports the power conversion stations listed in the following table.

Table 15. N1380 power conversion stations
Part number Feature code Description Quantities Efficiency Connector Input
4P57A82022  BYKH  ThinkSystem N1380 Neptune 15kW 3-Phase 200-480V Titanium Power Conversion Station  2, 3, 4  Titanium  Harting Han-ECO 10B  3-phase 200V-480Vac, 34A per phase

Quantities of 2, 3 or 4 are supported. The power conversion stations provide N+1 redundancy with oversubscription, depending on population and configuration of the node trays. Power policies with N+N redundancy with 4x power conversion stations are supported. Power policies with no redundancy are also supported.

Tip: Use Lenovo Capacity Planner to determine the power needs for your rack installation. See the Lenovo Capacity Planner section for details.

The power conversion stations have the following features:

  • 80 PLUS Titanium certified
  • Input: 180~528Vac (nominal 200~480Vac), 3-phase, WYE or Delta
  • Input current:
    • Delta: 46.53A (34A per phase)
    • WYE: 23.79A (34A per phase)
  • Supports N+N, N+1, or non-redundant power configurations
  • Power management configured through the SMM
  • Built-in overload and surge protection
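
The input current figures above follow from the three-phase power relation I = P / (√3 × V_LL). The sketch below approximately reproduces the data-sheet values; the efficiency and power-factor figures are assumptions chosen for illustration, not published specifications:

```python
import math

def line_current_a(p_out_w, v_line, efficiency=0.96, power_factor=0.95):
    """Three-phase line current for a given output power and line-to-line voltage."""
    p_apparent = p_out_w / (efficiency * power_factor)  # VA drawn from the line
    return p_apparent / (math.sqrt(3) * v_line)

# 15 kW station: ~23.7 A at 400 V WYE, ~47.5 A at 200 V Delta
```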

Power policies and power output

The following table lists the enclosure power capacity based on the power policy and the number of power modules installed.

Table 16. Power capacity based on power policy selected
Number of PCS units  N+0 policy (Note 1)  Chassis capacity (DC)  N+N policy (Notes 2,4,5)  Chassis capacity (DC)  N+1 policy (Notes 3,4,5)  Chassis capacity (DC)
1  1+0  15 kW  -  -  -  -
2  2+0  30 kW  Use 1+1 instead →  -  1+1  18 kW (15 kW)*
3  3+0  45 kW  -  -  2+1  36 kW (30 kW)*
4  4+0 (Note 6)  -  2+2  36 kW (30 kW)*  3+1  54 kW (45 kW)*

* The numbers in parentheses are the chassis power capacity if no throttling is required when the redundancy is degraded (e.g. 3+1 and 1 PSU fails)

Notes in the table:

  1. N+1+OVS (oversubscription) is the default if >1 PSU is installed. N+0 is the default with 1 PSU installed.
  2. If N+N is selected and there is an AC domain fault (2 PSUs go down), the power capacity is reduced to the value shown in parentheses within 1 second. Example: if the 2+2 policy is selected and all 4 PSUs are operational, the power capacity is 36 kW DC. If an AC domain faults, 2 PSUs go down, and the power is reduced to 30 kW within 1 second
  3. If N+1 is selected and there is a PSU fault, the power capacity is reduced to <= the value shown in parentheses within 1 second. Example: if the 3+1 policy is selected and all 4 PSUs are operational, the power capacity is 54 kW DC. If a PSU faults, the power is reduced to <=45 kW within 1 second. GPU trays are hard throttled with PWRBRK#. Intel trays are proportionally throttled with Psys power capping
  4. If the chassis is in the degraded redundancy state, due to 1 or more PSU faults, and an additional PSU fails, the chassis will attempt to stay alive by hard-throttling the trays (via PROCHOT# & PWRBRK#), but there is no guarantee. There is risk that the chassis could power off if the PSU rating is exceeded
  5. For the redundant power policies, only OVS modes are selectable in SMM (i.e., OVS is always enabled). When a user creates a hardware configuration in DCSC, a chassis maximum power estimate is completed via an LCP API call. If the chassis DC maximum power falls between the chassis degraded power capacity shown in parentheses and the chassis normal power capacity, a warning is displayed stating that the server could be throttled if a PSU fails. If the customer does not want potential throttling, they need to either reduce their configuration or select a different combination of PSUs and policy to achieve a higher power budget
  6. Possible future feature
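
The capacities in the table follow a consistent pattern: the degraded capacity is the number of active units times 15 kW, and the oversubscribed normal capacity is 1.2× that. The following Python sketch reproduces the published values; the 1.2 oversubscription factor is inferred from the table, not a published specification:

```python
PCS_OUTPUT_KW = 15  # rated output per power conversion station

def chassis_capacity_kw(active, redundant, oversub=1.2):
    """Return (normal, degraded) DC capacity in kW for an N+M power policy."""
    if redundant == 0:
        total = active * PCS_OUTPUT_KW
        return total, total
    degraded = active * PCS_OUTPUT_KW      # capacity after losing the redundant units
    return degraded * oversub, degraded    # oversubscription headroom when healthy
```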

Power cables

The power conversion stations function at a 32A current, but the power cables enable two PCS units to share a single 63A connection from the data center. Consequently, for racks fully equipped with 12 power conversion stations, only six power connections from the data center are needed.

The N1380 enclosure supports the following power cables.

Table 17. Power cables
Part number Feature code Description Connector Quantity required
Power cables routed from the floor
4L67A90303 BYKD 3M, 32A 200-480V, 3-Phase Delta IEC Floor Rack Power Cable 32A IEC plug (3ph+G+N) 1 per PCS
4L67A90301 BYKG 3M, 32A 200-480V, 3-Phase WYE IEC Floor Rack Power Cable 32A IEC plug (3ph+G+N) 1 per PCS
4L67A90172 BYKV 2.8M, 63A 200-480V, 3-Phase WYE IEC to Y-Splitter Rack Power Cable 60A IEC plug (3ph+G+N) 1 of each per 2 PCS units
4L67A97427 C4KW 0.95M, 63A 200-480V, 3-Phase Y-Splitter Floor Power Cable 60A IEC plug (3ph+G+N), use with BYKV
Power cables routed from above the rack cabinet
4L67A90304 BYKC 3M, 32A 200-480V, 3-Phase Delta IEC Top Rack Power Cable 32A IEC plug (3ph+G+N) 1 per PCS
4L67A90302 BYKE 3M, 32A 200-480V, 3-Phase WYE IEC Top Rack Power Cable 32A IEC plug (3ph+G+N) 1 per PCS
4L67A90172 BYKV 2.8M, 63A 200-480V, 3-Phase WYE IEC to Y-Splitter Rack Power Cable 60A IEC plug (3ph+G+N) 1 of each per 2 PCS units
4L67A90284 BYKF 0.95M, 63A 200-480V, 3-Phase Y-Splitter Top Power Cable 60A IEC plug (3ph+G+N), use with BYKV

System Management

The SC750 V4 contains an integrated service processor, XClarity Controller 3 (XCC3), which provides advanced control, monitoring, and alerting functions. The XCC3 is based on the AST2600 baseboard management controller (BMC) using a dual-core ARM Cortex A7 32-bit RISC service processor running at 1.2 GHz.

Topics in this section:

System I/O Board

The SC750 V4 implements a separate System I/O Board that connects to the system board as shown in the Internal view in the Components and connectors section. The System I/O Board contains all the connectors visible at the rear of the server as shown in the following figure.

Note: The NMI (non-maskable interrupt) button is not accessible from the rear of the server. Lenovo recommends using the NMI function that is part of the XCC user interfaces instead.

System I/O Board
Figure 14. System I/O Board

The board also has the following components:

  • XClarity Controller 3, implemented using the ASPEED AST2600 baseboard management controller (BMC).
  • Root of Trust (RoT) module - implements Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT) which enables the server to be NIST SP800-193 compliant. For more details about PFR, see the Security section.
  • MicroSD card port to enable the use of a MicroSD card for additional storage for use with the XCC3 controller. XCC3 can use the storage as a Remote Disc on Card (RDOC) device (up to 4GB of storage). It can also be used to store firmware updates (including N-1 firmware history) for ease of deployment.

    Tip: Without a MicroSD card installed, the XCC controller will have 100MB of available RDOC storage.

Ordering information for the supported MicroSD cards is listed in the MicroSD for XCC local storage section.

Local management

The following figure shows the ports and LEDs on the front of the SC750 V4.

SC750 V4 ports for local management
Figure 15. SC750 V4 Front operator panel

The LEDs are as follows:

  • Identification (ID) LED (blue): Activated remotely via XCC to identify a specific server to local service engineers
  • Error LED (yellow): Indicates if there is a system error. The error is reported in the XCC system log.
  • Root of Trust fault LED (yellow): Indicates if there is an error with the Root of Trust (RoT) module

Information pull-out tab

The front of the server also houses an information pull-out tab (also known as the network access tag). See Figure 2 for the location. A label on the tab shows the network information (MAC address and other data) to remotely access the service processor.

External Diagnostics Handset

The SC750 V4 has a port to connect an External Diagnostics Handset as shown in the following figure.

The External Diagnostics Handset allows quick access to system status, firmware, network, and health information. The LCD display on the panel and the function buttons give you access to the following information:

  • Active alerts
  • Status Dashboard
  • System VPD: machine type & model, serial number, UUID string
  • System firmware levels: UEFI and XCC firmware
  • XCC network information: hostname, MAC address, IP address, DNS addresses
  • Environmental data: Ambient temperature, CPU temperature, AC input voltage, estimated power consumption
  • Active XCC sessions
  • System reset action

The handset has a magnet on the back to allow you to easily mount it in a convenient place on any rack cabinet.

SC750 V4 External Diagnostics Handset
Figure 16. SC750 V4 External Diagnostics Handset

Ordering information for the External Diagnostics Handset is listed in the following table.

Table 18. External Diagnostics Handset ordering information
Part number Feature code Description
4TA7A64874 1410 BEUX ThinkSystem External Diagnostics Handset

System status with XClarity Mobile

The XClarity Mobile app includes a tethering function where you can connect your Android or iOS device to the server via USB to see the status of the server.

The steps to connect the mobile device are as follows:

  1. Enable USB Management on the server by holding down the ID button for 3 seconds (or pressing the dedicated USB management button if one is present)
  2. Connect the mobile device via a USB cable to the server's USB port with the management symbol
  3. In iOS or Android settings, enable Personal Hotspot or USB Tethering
  4. Launch the Lenovo XClarity Mobile app

Once connected you can see the following information:

  • Server status including error logs (read only, no login required)
  • Server management functions (XClarity login credentials required)

Remote management

The 1Gb onboard port and one of the 25Gb onboard ports (port 1) on the front of the SC750 V4 offer a connection to the XCC for remote management. This shared-NIC functionality allows the ports to be used both for operating system networking and for remote management.

Remote server management is provided through industry-standard interfaces:

  • Intelligent Platform Management Interface (IPMI) Version 2.0
  • Simple Network Management Protocol (SNMP) Version 3 (no SET commands; no SNMP v1)
  • Common Information Model (CIM-XML)
  • Representational State Transfer (REST) support
  • Redfish support (DMTF compliant)
  • Web browser - HTML 5-based browser interface (Java and ActiveX not required) using a responsive design (content optimized for device being used - laptop, tablet, phone) with NLS support
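As an illustration of the Redfish interface listed above, the following Python sketch builds an authenticated GET request for the XCC Redfish service root using only the standard library. The host name and credentials are placeholders, not values confirmed for this server; substitute those configured on your XCC management port.

```python
import base64
import json
import urllib.request

def redfish_request(host: str, path: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated Redfish GET request for an XCC endpoint.

    The host and credentials are placeholders for illustration.
    """
    url = f"https://{host}{path}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    })

# Example: target the standard Redfish service root defined by DMTF.
req = redfish_request("xcc-hostname.example.com", "/redfish/v1", "USERID", "PASSW0RD")
print(req.full_url)
# To actually send it (requires network access to the XCC):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["RedfishVersion"])
```

The request is only constructed here, not sent, so the sketch can be read without a live XCC on the network.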

The 1Gb port and 25Gb Port 1 support NC-SI. You can enable NC-SI in the factory using the feature codes listed in the following table. If neither feature code is selected, both ports will have NC-SI disabled.

Table 19. Enabling NC-SI on the embedded network ports
Feature code Description
BEXY ThinkSystem NC-SI enabled on SFP28 Port (Port 1)
BEXZ ThinkSystem NC-SI enabled on RJ45 Port

IPMI via the Ethernet port (IPMI over LAN) is supported, however it is disabled by default. For CTO orders you can specify whether you want the feature enabled or disabled in the factory, using the feature codes listed in the following table.

Table 20. IPMI-over-LAN settings
Feature code Description
B7XZ Disable IPMI-over-LAN (default)
B7Y0 Enable IPMI-over-LAN

XCC3 Premier

The XCC3 service processor in the SC750 V4 supports an upgrade to the Premier level of features. XCC3 Premier in ThinkSystem V4 servers is equivalent to the XCC2 Premium offering in ThinkSystem V3 servers.

XCC3 Premier adds the following functions:

  • Enterprise Strict Security mode - Enforces CNSA 1.0 level security
  • Remotely viewing video with graphics resolutions up to 1600x1200 at 75 Hz with up to 23 bits per pixel, regardless of the system state
  • Remotely accessing the server using the keyboard and mouse from a remote client
  • International keyboard mapping support
  • Redirecting serial console via SSH
  • Component replacement log (Maintenance History log)
  • Access restriction (IP address blocking)
  • Displaying graphics for real-time and historical power usage data and temperature
  • Mapping the ISO and image files located on the local client as virtual drives for use by the server
  • Mounting the remote ISO and image files via HTTPS, SFTP, CIFS, and NFS
  • Power capping
  • License for XClarity Energy Manager

The following additional XCC3 Premier features are planned for 1Q/2025:

  • System Guard - Monitor hardware inventory for unexpected component changes, and simply log the event or prevent booting
  • Neighbor Group - Enables administrators to manage and synchronize configurations and firmware level across multiple servers
  • Syslog alerting
  • Lenovo SED security key management
  • Boot video capture and crash video capture
  • Virtual console collaboration - Ability for up to 6 remote users to log into the remote session simultaneously
  • Remote console Java client
  • System utilization data and graphic view
  • Single sign on with Lenovo XClarity Administrator
  • Update firmware from a repository

Ordering information is listed in the following table. XCC3 Premier is a software license upgrade - no additional hardware is required.

Table 21. XCC3 Premier license upgrade
Part number Feature code Description
7S0X000XWW SCY0 Lenovo XClarity Controller 3 (XCC3) Premier

With XCC3 Premier, for CTO orders, you can request that System Guard be enabled in the factory and the first configuration snapshot be recorded. To add this to an order, select the feature code listed in the following table. The selection is made in the Security tab of the DCSC configurator.

Table 22. Enable System Guard in the factory (CTO orders)
Feature code Description
BUT2 Install System Guard

For more information about System Guard, see https://pubs.lenovo.com/xcc2/NN1ia_c_systemguard

Remote management using the SMM3

The N1380 enclosure includes a System Management Module 3 (SMM), installed in the rear of the enclosure. See Enclosure rear view for the location of the SMM. The SMM provides remote management of both the enclosure and the individual servers installed in the enclosure. The SMM can be accessed through a web browser interface and via Intelligent Platform Management Interface (IPMI) 2.0 commands.

The SMM provides the following functions:

  • Remote connectivity to XCC controllers in each node in the enclosure
  • Node-level reporting and control (for example, node virtual reseat/reset)
  • Enclosure power management
  • Enclosure thermal management
  • Enclosure inventory

The following figure shows the LEDs and connectors of the SMM.

System management module in the N1380 enclosure
Figure 17. System management module in the N1380 enclosure

The SMM has the following ports and LEDs:

  • 2x Gigabit Ethernet RJ45 ports for remote management access
  • USB port and activation button for service
  • SMM reset button
  • System error LED (yellow)
  • Identification (ID) LED (blue)
  • Status LED (green)
  • System power LED (green)

The USB service button and USB service port are used to gather service data in the event of an error. Pressing the service button copies First Failure Data Collection (FFDC) data to a USB key installed in the USB service port. The reset button is used to perform an SMM reset (short press) or to restore the SMM back to factory defaults (press for 4+ seconds).

The two RJ45 Ethernet ports allow the Ethernet management connections to be daisy-chained, reducing the number of ports needed in your management switches and the overall cable density for systems management. With this feature, you connect the first SMM to your management network, the SMM in the second enclosure to the first SMM, and the SMM in the third enclosure to the SMM in the second enclosure.

Up to 3 enclosures can be connected in a daisy-chain configuration and all 48 servers in those enclosures can be managed remotely via one single Ethernet connection.

Notes:

  • If you are using IEEE 802.1D spanning tree protocol (STP) then at most 3 enclosures can be connected together
  • Do not form a loop with the network cabling. The dual-port SMM at the end of the chain should not be connected back to the switch that is connected to the top of the SMM chain.

For more information about the SMM, see the SMM3 User's Guide:
https://pubs.lenovo.com/mgt_tools_smm3/

Lenovo HPC & AI Software Stack

The Lenovo HPC & AI Software Stack combines open-source software with proprietary best-of-breed supercomputing software to provide a consumable open-source HPC software stack embraced by Lenovo HPC customers.

It provides a fully tested and supported, complete but customizable HPC software stack that enables administrators and users to utilize their Lenovo supercomputers optimally and in an environmentally sustainable way.

The Lenovo HPC & AI Software Stack is built on the most widely adopted and maintained HPC community software for orchestration and management. It integrates third-party components, especially around programming environments and performance optimization, to complement and enhance these capabilities, adding value in software and services for our customers.

The key open-source components of the software stack are as follows:

  • Confluent Management

    Confluent is Lenovo-developed open-source software designed to discover, provision, and manage HPC clusters and the nodes that comprise them. Confluent provides powerful tooling to deploy and update software and firmware to multiple nodes simultaneously, with simple and readable modern software syntax.

  • SLURM Orchestration

    Slurm is integrated as an open source, flexible, and modern choice to manage complex workloads for faster processing and optimal utilization of the large-scale and specialized high-performance and AI resource capabilities needed per workload provided by Lenovo systems. Lenovo provides support in partnership with SchedMD.

  • LiCO Webportal

    Lenovo Intelligent Computing Orchestration (LiCO) is a Lenovo-developed consolidated Graphical User Interface (GUI) for monitoring, managing and using cluster resources. The webportal provides workflows for both AI and HPC, and supports multiple AI frameworks, including TensorFlow, Caffe, Neon, and MXNet, allowing you to leverage a single cluster for diverse workload requirements.

  • Energy Aware Runtime

    EAR is a powerful European open-source energy management suite supporting everything from monitoring and power capping to live optimization during application runtime. Lenovo collaborates with Barcelona Supercomputing Centre (BSC) and EAS4DC on its continuous development and support, and offers three versions with differentiated capabilities.

For more information and ordering information, see the Lenovo HPC & AI Software Stack product guide:
https://lenovopress.com/lp1651

Lenovo XClarity Provisioning Manager

Lenovo XClarity Provisioning Manager (LXPM) is a UEFI-based application embedded in ThinkSystem servers and accessible via the F1 key during system boot.

LXPM provides the following functions:

  • Graphical UEFI Setup
  • System inventory information and VPD update
  • System firmware updates (UEFI and XCC)
  • RAID setup wizard
  • OS installation wizard (including unattended OS installation)
  • Diagnostics functions

Lenovo XClarity Essentials

Lenovo offers the following XClarity Essentials software tools that can help you set up, use, and maintain the server at no additional cost:

  • Lenovo Essentials OneCLI

    OneCLI is a collection of server management tools that uses a command line interface program to manage firmware, hardware, and operating systems. It provides functions to collect full system health information (including health status), configure system settings, and update system firmware and drivers.

  • Lenovo Essentials UpdateXpress

    The UpdateXpress tool is a standalone GUI application for firmware and device driver updates that helps you keep your server firmware and device drivers up to date and avoid unnecessary server outages. The tool acquires and deploys individual updates and UpdateXpress System Packs (UXSPs), which are integration-tested bundles.

  • Lenovo Essentials Bootable Media Creator

    The Bootable Media Creator (BOMC) tool is used to create bootable media for offline firmware update.

For more information and downloads, visit the Lenovo XClarity Essentials web page:
http://support.lenovo.com/us/en/documents/LNVO-center

Lenovo XClarity Administrator

Lenovo XClarity Administrator is a centralized resource management solution designed to reduce complexity, speed response, and enhance the availability of Lenovo systems and solutions. It provides agent-free hardware management for ThinkSystem servers, in addition to ThinkServer, System x, and Flex System servers. The administration dashboard is based on HTML 5 and allows fast location of resources so tasks can be run quickly.

Because Lenovo XClarity Administrator does not require any agent software to be installed on the managed endpoints, there are no CPU cycles spent on agent execution, and no memory is used, which means that up to 1GB of RAM and 1 - 2% CPU usage is saved, compared to a typical managed system where an agent is required.

Lenovo XClarity Administrator is an optional software component for the SC750 V4. The software can be downloaded and used at no charge to discover and monitor the SC750 V4 and to manage firmware upgrades.

If software support is required for Lenovo XClarity Administrator, or premium features such as configuration management and operating system deployment are required, Lenovo XClarity Pro software subscription should be ordered. Lenovo XClarity Pro is licensed on a per managed system basis, that is, each managed Lenovo system requires a license.

The following table lists the Lenovo XClarity software license options.

Table 23. Lenovo XClarity Pro ordering information
Part number Feature code Description
00MT201 1339 Lenovo XClarity Pro, per Managed Endpoint w/1 Yr SW S&S
00MT202 1340 Lenovo XClarity Pro, per Managed Endpoint w/3 Yr SW S&S
00MT203 1341 Lenovo XClarity Pro, per Managed Endpoint w/5 Yr SW S&S
7S0X000HWW SAYV Lenovo XClarity Pro, per Managed Endpoint w/6 Yr SW S&S
7S0X000JWW SAYW Lenovo XClarity Pro, per Managed Endpoint w/7 Yr SW S&S

Lenovo XClarity Administrator offers the following standard features that are available at no charge:

  • Auto-discovery and monitoring of Lenovo systems
  • Firmware updates and compliance enforcement
  • External alerts and notifications via SNMP traps, syslog remote logging, and e-mail
  • Secure connections to managed endpoints
  • NIST 800-131A or FIPS 140-2 compliant cryptographic standards between the management solution and managed endpoints
  • Integration into existing higher-level management systems such as cloud automation and orchestration tools through REST APIs, providing extensive external visibility and control over hardware resources
  • An intuitive, easy-to-use GUI
  • Scripting with Windows PowerShell, providing command-line visibility and control over hardware resources

Lenovo XClarity Administrator offers the following premium features that require an optional Pro license:

  • Pattern-based configuration management that allows you to define configurations once and apply them repeatedly without errors when deploying new servers or redeploying existing servers, without disrupting the fabric
  • Bare-metal deployment of operating systems and hypervisors to streamline infrastructure provisioning

For more information, refer to the Lenovo XClarity Administrator Product Guide:
http://lenovopress.com/tips1200

Lenovo XClarity Integrators

Lenovo also offers software plug-in modules, Lenovo XClarity Integrators, to manage physical infrastructure from leading external virtualization management software tools including those from Microsoft and VMware.

These integrators are offered at no charge, however if software support is required, a Lenovo XClarity Pro software subscription license should be ordered.

Lenovo XClarity Integrators offer the following additional features:

  • Ability to discover, manage, and monitor Lenovo server hardware from VMware vCenter or Microsoft System Center
  • Deployment of firmware updates and configuration patterns to Lenovo x86 rack servers and Flex System from the virtualization management tool
  • Non-disruptive server maintenance in clustered environments that reduces workload downtime by dynamically migrating workloads from affected hosts during rolling server updates or reboots
  • Greater service level uptime and assurance in clustered environments during unplanned hardware events by dynamically triggering workload migration from impacted hosts when impending hardware failures are predicted

For more information about all the available Lenovo XClarity Integrators, see the Lenovo XClarity Administrator Product Guide: https://lenovopress.com/tips1200-lenovo-xclarity-administrator

Lenovo XClarity Energy Manager

Lenovo XClarity Energy Manager (LXEM) is a power and temperature management solution for data centers. It is an agent-free, web-based console that enables you to monitor and manage power consumption and temperature in your data center through the management console. It enables server density and data center capacity to be increased through the use of power capping.

LXEM is a licensed product. A single-node LXEM license is included with the XClarity Controller Premier upgrade as described in the XCC3 Premier section. If your server does not have the XCC Premier upgrade, Energy Manager licenses can be ordered as shown in the following table.

Table 24. Lenovo XClarity Energy Manager
Part number Description
4L40E51621 Lenovo XClarity Energy Manager Node License (1 license needed per server)

For more information, see the Lenovo XClarity Energy Manager product page.

Lenovo Capacity Planner

Lenovo Capacity Planner is a power consumption evaluation tool that enhances data center planning by enabling IT administrators and pre-sales professionals to understand various power characteristics of racks, servers, and other devices. Capacity Planner can dynamically calculate the power consumption, current, British Thermal Unit (BTU), and volt-ampere (VA) rating at the rack level, improving the planning efficiency for large scale deployments.

For more information, refer to the Capacity Planner web page:
http://datacentersupport.lenovo.com/us/en/solutions/lnvo-lcp
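The rack-level quantities that Capacity Planner reports can be approximated with textbook conversions. The following sketch is illustrative arithmetic only, not the tool's algorithm; the power draw and power factor used are assumed values.

```python
def btu_per_hour(watts: float) -> float:
    """Heat load in BTU/hr from electrical power in watts (1 W ≈ 3.412 BTU/hr)."""
    return watts * 3.412

def volt_amperes(watts: float, power_factor: float = 0.95) -> float:
    """Apparent power (VA) from real power, assuming a power factor (hypothetical value)."""
    return watts / power_factor

# Example: a rack of 8 trays drawing a hypothetical 2 kW each.
rack_watts = 8 * 2000
print(round(btu_per_hour(rack_watts)))   # heat load in BTU/hr
print(round(volt_amperes(rack_watts)))   # apparent power in VA
```

For actual planning figures, use Capacity Planner itself, which accounts for the measured power characteristics of each configuration.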

Security

Security features

The server offers the following electronic security features:

  • Secure Boot function of the Intel Xeon processor
  • Support for Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT) - see the Platform Firmware Resiliency section
  • Firmware signature processes compliant with FIPS and NIST requirements
  • System Guard (part of XCC3 Premier) - Proactive monitoring of hardware inventory for unexpected component changes
  • Administrator and power-on password
  • Integrated Trusted Platform Module (TPM) supporting TPM 2.0
  • For China users, optional Nationz TPM 2.0 module
  • Self-encrypting drives (SEDs) with support for enterprise key managers - see the SED encryption key management section

The server is NIST SP 800-147B compliant.

The following table lists the security options for the SC750 V4.

Table 25. Security features
Part number Feature code Description
CTO only C1YL ThinkSystem V4 PRC NationZ TPM 2.0 Module (China customers only)

Platform Firmware Resiliency - Lenovo ThinkShield

Lenovo's ThinkShield Security is a transparent and comprehensive approach to security that extends to all dimensions of our data center products: from development, to supply chain, and through the entire product lifecycle.

The ThinkSystem SC750 V4 includes Platform Firmware Resiliency (PFR) hardware Root of Trust (RoT) which enables the system to be NIST SP800-193 compliant. This offering further enhances key platform subsystem protections against unauthorized firmware updates and corruption, to restore firmware to an integral state, and to closely monitor firmware for possible compromise from cyber-attacks.

PFR operates on the following server components:

  • UEFI image – the low-level server firmware that connects the operating system to the server hardware
  • XCC image – the management “engine” software that controls and reports on the server status separate from the server operating system
  • FPGA image – the code that runs the server’s lowest level hardware controller on the motherboard

The Lenovo Platform Root of Trust Hardware performs the following three main functions:

  • Detection – Measures the firmware and updates for authenticity
  • Recovery – Recovers a corrupted image to a known-safe image
  • Protection – Monitors the system to ensure the known-good firmware is not maliciously written

These enhanced protection capabilities are implemented using a dedicated, discrete security processor whose implementation has been rigorously validated by leading third-party security firms. Security evaluation results and design details are available for customer review – providing unprecedented transparency and assurance.

The SC750 V4 includes support for Secure Boot, a UEFI firmware security feature developed by the UEFI Forum that ensures only immutable and signed software is loaded during boot. The use of Secure Boot helps prevent malicious code from being loaded and helps prevent attacks, such as the installation of rootkits. Lenovo offers the capability to enable Secure Boot in the factory, to ensure end-to-end protection. Alternatively, Secure Boot can be left disabled in the factory, allowing the customer to enable it themselves at a later point, if desired.

The following table lists the relevant feature code(s).

Table 26. Secure Boot options
Part number Feature code Description Purpose
CTO only BPKQ TPM 2.0 with Secure Boot Configure the system in the factory with Secure Boot enabled.
CTO only BPKR TPM 2.0 Configure the system without Secure Boot enabled. Customers can enable Secure Boot later if desired.

Tip: If Secure Boot is not enabled in the factory, it can be enabled later by the customer. However once Secure Boot is enabled, it cannot be disabled.

Intel Transparent Supply Chain

Add a layer of protection in your data center and have peace of mind that the server hardware you bring into it is safe and authentic, with documented, testable, and provable origin.

Lenovo has one of the world’s best supply chains, as ranked by Gartner Group, backed by extensive and mature supply chain security programs that exceed industry norms and US Government standards. Now we are the first Tier 1 manufacturer to offer Intel® Transparent Supply Chain in partnership with Intel, offering you an unprecedented degree of supply chain transparency and assurance.

To enable Intel Transparent Supply Chain for the Intel-based servers in your order, add the following feature code in the DCSC configurator, under the Security tab.

Table 27. Intel Transparent Supply Chain ordering information
Feature code Description
C4M7 Intel Transparent Supply Chain

For more information on this offering, see the paper Introduction to Intel Transparent Supply Chain on Lenovo ThinkSystem Servers, available from https://lenovopress.com/lp1434-introduction-to-intel-transparent-supply-chain-on-thinksystem-servers.

Security standards

The SC750 V4 supports the following security standards and capabilities:

  • Industry Standard Security Capabilities
    • Intel CPU Enablement
      • Intel Trust Domain Extensions (Intel TDX)
      • Intel Crypto Acceleration
      • Intel QuickAssist Software Acceleration
      • Intel Platform Firmware Resilience Support
      • Intel Control-Flow Enforcement Technology
      • Intel Total Memory Encryption - Multi Key
      • Intel Total Memory Encryption
      • Intel AES New Instructions (AES-NI)
      • Intel OS Guard
      • Execute Disable Bit (XD)
      • Intel Boot Guard
      • Mode-based Execute Control (MBEC)
      • Intel Virtualization Technology (VT-x)
      • Intel Virtualization Technology for Directed I/O (VT-d)
    • Microsoft Windows Security Enablement
      • Credential Guard
      • Device Guard
      • Host Guardian Service
    • TCG (Trusted Computing Group) TPM (Trusted Platform Module) 2.0
    • UEFI (Unified Extensible Firmware Interface) Forum Secure Boot
  • Hardware Root of Trust and Security
    • Independent security subsystem providing platform-wide NIST SP800-193 compliant Platform Firmware Resilience (PFR)
    • Management domain RoT supplemented by the Secure Boot features of XCC
  • Platform Security
    • Boot and run-time firmware integrity monitoring with rollback to known-good firmware (e.g., “self-healing”)
    • Non-volatile storage bus security monitoring and filtering
    • Resilient firmware implementation, such as to detect and defeat unauthorized flash writes or SMM (System Management Mode) memory incursions
    • Patented IPMI KCS channel privileged access authorization (USPTO Patent# 11,256,810)
    • Host and management domain authorization, including integration with CyberArk for enterprise password management
    • KMIP (Key Management Interoperability Protocol) compliant, including support for IBM SKLM and Thales KeySecure
    • Reduced “out of box” attack surface
    • Configurable network services

    For more information on platform security, see the paper “How to Harden the Security of your ThinkSystem Server and Management Applications” available from https://lenovopress.com/lp1260-how-to-harden-the-security-of-your-thinksystem-server.

  • Standards Compliance and/or Support
    • NIST SP800-131A rev 2 “Transitioning the Use of Cryptographic Algorithms and Key Lengths”
    • NIST SP800-147B “BIOS Protection Guidelines for Servers”
    • NIST SP800-193 “Platform Firmware Resiliency Guidelines”
    • ISO/IEC 11889 “Trusted Platform Module Library”
    • Common Criteria TCG Protection Profile for “PC Client Specific TPM 2.0”
    • European Union Commission Regulation 2019/424 (“ErP Lot 9”) “Ecodesign Requirements for Servers and Data Storage Products” Secure Data Deletion
    • Optional FIPS 140-2 validated Self-Encrypting Disks (SEDs) with external KMIP-based key management
  • Product and Supply Chain Security
    • Suppliers validated through Lenovo’s Trusted Supplier Program
    • Developed in accordance with Lenovo’s Secure Development Lifecycle (LSDL)
    • Continuous firmware security validation through automated testing, including static code analysis, dynamic network and web vulnerability testing, software composition analysis, and subsystem-specific testing, such as UEFI security configuration validation
    • Ongoing security reviews by US-based security experts, with attestation letters available from our third-party security partners
    • Digitally signed firmware, stored and built on US-based infrastructure and signed on US-based Hardware Security Modules (HSMs)
    • Manufacturing transparency via Intel Transparent Supply Chain (for details, see https://lenovopress.com/lp1434-introduction-to-intel-transparent-supply-chain-on-lenovo-thinksystem-servers)
    • TAA (Trade Agreements Act) compliant manufacturing, by default in Mexico for North American markets with additional US and EU manufacturing options
    • US 2019 NDAA (National Defense Authorization Act) Section 889 compliant

Operating system support

The server supports the following operating systems:

  • Red Hat Enterprise Linux 9.5
  • SUSE Linux Enterprise Server 15 SP6
  • Ubuntu 22.04 LTS 64-bit
  • Ubuntu 24.04 LTS 64-bit

The server is also certified or tested with the following operating systems:

  • Rocky Linux
  • AlmaLinux

See Operating System Interoperability Guide (OSIG) for the complete list of supported, certified, and tested operating systems, including version and point releases:
https://lenovopress.lenovo.com/osig#servers=sc750-v4-7ddj

Also review the latest LeSI Best Recipe to see the operating systems that are supported via Lenovo Scalable Infrastructure (LeSI):
https://support.lenovo.com/us/en/solutions/HT505184#5

Physical and electrical specifications

Eight SC750 V4 server trays are installed in the N1380 enclosure. Each SC750 V4 tray has the following dimensions:

  • Width: 546 mm (21.5 inches)
  • Height: 53 mm (2.1 inches)
  • Depth: 760 mm (29.9 inches) (799 mm, including the water connections at the rear of the server)

The N1380 enclosure has the following overall physical dimensions, excluding components that extend outside the standard chassis, such as EIA flanges, front security bezel (if any), and power conversion station handles:

  • Width: 540 mm (21.3 inches)
  • Height: 572 mm (22.5 inches)
  • Depth: 1302 mm (51.2 inches)

The following table lists the detailed dimensions. See the figure below for the definition of each dimension.

Table 28. Detailed dimensions
Dimension Description
483 mm Xa = Width, to the outsides of the front EIA flanges
448 mm Xb = Width, to the rack rail mating surfaces
540 mm Xc = Width, to the outer most chassis body feature
572 mm Ya = Height, from the bottom of chassis to the top of the chassis
1062 mm Za = Depth, from the rack flange mating surface to the rearmost I/O port surface
1076 mm Zb = Depth, from the rack flange mating surface to the rearmost feature of the chassis body
1114 mm Zc = Depth, from the rack flange mating surface to the rearmost feature such as power supply handle
226 mm Zd = Depth, from the forwardmost feature on front of EIA flange to the rack flange mating surface
226 mm Ze = Depth, from the front of security bezel (if applicable) or forwardmost feature to the rack flange mating surface

Enclosure dimensions
Figure 18. Enclosure dimensions

The shipping (cardboard packaging) dimensions of the SC750 V4 are as follows:

  • Width: 800 mm (31.5 inches)
  • Height: 260 mm (10.2 inches)
  • Depth: 1200 mm (47.2 inches)

The shipping (cardboard packaging) dimensions of the N1380 are as follows:

  • Width: 800 mm (31.5 inches)
  • Height: 1027 mm (40.4 inches)
  • Depth: 1600 mm (63.0 inches)

The SC750 V4 tray has the following maximum weight:

  • 37.2 kg (82 lbs)

The N1380 enclosure has the following weight:

  • Empty enclosure (with midplane and cables): 94 kg (208 lbs)
  • Fully configured enclosure with 4x water-cooled power conversion stations and 8x SC750 V4 server trays (16 nodes): 484.5 kg (1069 lbs)

The enclosure has the following electrical specifications for input power conversion stations:

  • Input voltage: 180-528Vac (nominal 200-480Vac), 3-phase, WYE or Delta
  • Input current (each power conversion station):
    • Delta: 46.53A (34A per phase)
    • WYE: 23.79A (34A per phase)
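The relationship between the Delta and WYE current figures above follows the standard three-phase formula I = P / (√3 × V_LL × PF). The sketch below is illustrative only: the load, supply voltages, and power factor are assumed values, and the table above remains the authoritative rating.

```python
import math

def line_current(power_w: float, v_line_line: float, power_factor: float = 1.0) -> float:
    """Three-phase line current: I = P / (sqrt(3) * V_LL * PF)."""
    return power_w / (math.sqrt(3) * v_line_line * power_factor)

# The same power drawn at a higher line-to-line voltage needs proportionally
# less line current -- here with assumed 200 V (Delta) and 400 V (WYE) feeds.
power = 16000  # watts, hypothetical load
print(round(line_current(power, 200), 2))  # Delta-range example
print(round(line_current(power, 400), 2))  # WYE-range example
```

Doubling the line-to-line voltage halves the line current for the same power, which is why the WYE figure in the table is roughly half the Delta figure.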

Operating environment

The SC750 V4 server trays and N1380 enclosure are supported in the following environment.

For more information, see the following documentation page:
https://pubs.lenovo.com/sc750-v4/server_specifications_environmental

Air temperature and humidity

Air temperature/humidity requirements:

  • Operating: ASHRAE A2: 10°C to 35°C (50°F to 95°F); when the altitude exceeds 900 m (2953 ft), the maximum ambient temperature value decreases by 1°C (1.8°F) with every 300 m (984 ft) of altitude increase.
  • Powered off: 5°C to 45°C (41°F to 113°F)
  • Shipping/storage: -40°C to 60°C (-40°F to 140°F)
  • Operating humidity: ASHRAE Class A2: 8% - 80%, maximum dew point : 21°C (70°F)
  • Shipment/storage humidity: 8% - 90%

Altitude:

  • Maximum altitude: 3048 m (10 000 ft)
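The altitude derating rule above can be expressed as a small helper. This sketch assumes linear (rather than stepwise) derating above 900 m, which is one interpretation of the wording; consult the environmental specifications page for the authoritative limits.

```python
def max_ambient_c(altitude_m: float) -> float:
    """ASHRAE class A2 operating limit: 35 C up to 900 m, then derated by
    1 C for every additional 300 m (linear interpretation)."""
    if altitude_m <= 900:
        return 35.0
    return 35.0 - (altitude_m - 900) / 300.0

print(max_ambient_c(900))   # 35.0
print(max_ambient_c(1200))  # 34.0
print(max_ambient_c(3048))  # limit at the 3048 m maximum supported altitude
```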

Particulate contamination

Airborne particulates (including metal flakes or particles) and reactive gases, acting alone or in combination with other environmental factors such as humidity or temperature, might damage the system, causing it to malfunction or stop working altogether.

The following specifications indicate the limits of particulates that the system can tolerate:

  • Reactive gases:
    • The copper reactivity level shall be less than 200 Angstroms per month (Å/month)
    • The silver reactivity level shall be less than 200 Å/month
  • Airborne particulates:
    • The room air should be continuously filtered with MERV 8 filters.
    • Air entering a data center should be filtered with MERV 11 or preferably MERV 13 filters.
    • The deliquescent relative humidity of the particulate contamination should be more than 60% RH
    • Environment must be free of zinc whiskers

For additional information, see the Specifications section of the documentation for the server, available from the Lenovo Documents site, https://pubs.lenovo.com/

Regulatory compliance

The SC750 V4 conforms to the following standards:

  • ANSI/UL 62368-1
  • Morocco CMIM Certification (CM)
  • Russia, Belorussia and Kazakhstan, TP EAC 037/2016 (for RoHS)
  • CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55035, EN61000-3-11, EN61000-3-12, (EU) 2019/424, and EN IEC 63000 (RoHS))
  • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 7, Class A
  • CISPR 32, Class A, CISPR 35
  • Korea KS C 9832 Class A, KS C 9835
  • Japan VCCI, Class A
  • Taiwan BSMI CNS15936, Class A; Section 5 of CNS15663
  • Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
  • SGS, VOC Emission
  • Japanese Energy-Saving Act
  • EU2019/424 Energy Related Product (ErP Lot9)
  • China CELP certificate, HJ 2507-2011

The N1380 conforms to the following standards:

  • ANSI/UL 62368-1
  • Morocco CMIM Certification (CM)
  • Russia, Belorussia and Kazakhstan, TP EAC 037/2016 (for RoHS)
  • Russia, Belorussia and Kazakhstan, EAC: TP TC 004/2011 (for Safety); TP TC 020/2011 (for EMC)
  • CE, UKCA Mark (EN55032 Class A, EN62368-1, EN55035, EN61000-3-11, EN61000-3-12, (EU) 2019/424, and EN IEC 63000 (RoHS))
  • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 7, Class A
  • CISPR 32, Class A, CISPR 35
  • Korea KS C 9832 Class A, KS C 9835
  • Japan VCCI, Class A
  • Australia/New Zealand AS/NZS CISPR 32, Class A; AS/NZS 62368.1
  • SGS, VOC Emission
  • China CELP certificate, HJ 2507-2011

Warranty and Support

The server and enclosure have the following warranty:

  • Lenovo ThinkSystem SC750 V4 (7DDJ) - 3-year warranty
  • Lenovo ThinkSystem N1380 Enclosure (7DDH) - 3-year warranty
  • Genie Lift GL-8 Material Lift (7D5Y) - 3-year warranty through the vendor (Genie)

The standard warranty terms are customer-replaceable unit (CRU) and onsite (for field-replaceable units (FRUs) only), with standard call center support during normal business hours and 9x5 Next Business Day Parts Delivered.

Lenovo’s additional support services provide a sophisticated, unified support structure for your data center, with an experience consistently ranked number one in customer satisfaction worldwide. Available offerings include:

  • Premier Support

    Premier Support provides a Lenovo-owned customer experience and delivers direct access to technicians skilled in hardware, software, and advanced troubleshooting, in addition to the following:

    • Direct technician-to-technician access through a dedicated phone line
    • 24x7x365 remote support
    • Single point of contact service
    • End to end case management
    • Third-party collaborative software support
    • Online case tools and live chat support
    • On-demand remote system analysis
  • Warranty Upgrade (Preconfigured Support)

    Services are available to meet the on-site response time targets that match the criticality of your systems.

    • 3, 4, or 5 years of service coverage
    • 1-year or 2-year post-warranty extensions
    • Foundation Service: 9x5 service coverage with next business day onsite response. YourDrive YourData is an optional extra (see below).
    • Essential Service: 24x7 service coverage with 4-hour onsite response or 24-hour committed repair (available only in select markets). Bundled with YourDrive YourData.
    • Advanced Service: 24x7 service coverage with 2-hour onsite response or 6-hour committed repair (available only in select markets). Bundled with YourDrive YourData.
  • Managed Services

    Lenovo Managed Services provides continuous 24x7 remote monitoring (plus 24x7 call center availability) and proactive management of your data center using state-of-the-art tools, systems, and practices by a team of highly skilled and experienced Lenovo services professionals.

    Quarterly reviews check error logs, verify firmware and OS device driver levels, and update software as needed. We’ll also maintain records of the latest patches, critical updates, and firmware levels to ensure your systems are providing business value through optimized performance.

  • Technical Account Management (TAM)

    A Lenovo Technical Account Manager helps you optimize the operation of your data center based on a deep understanding of your business. You gain direct access to your Lenovo TAM, who serves as your single point of contact to expedite service requests, provide status updates, and furnish reports to track incidents over time. In addition, your TAM will help proactively make service recommendations and manage your service relationship with Lenovo to make certain your needs are met.

  • Enterprise Server Software Support

    Enterprise Software Support is an additional support service providing customers with software support on Microsoft, Red Hat, SUSE, and VMware applications and systems. Around-the-clock availability for critical problems plus unlimited calls and incidents helps customers address challenges fast, without incremental costs. Support staff can answer troubleshooting and diagnostic questions, address product compatibility and interoperability issues, isolate causes of problems, report defects to software vendors, and more.

  • YourDrive YourData

    Lenovo’s YourDrive YourData is a multi-drive retention offering that ensures your data is always under your control, regardless of the number of drives that are installed in your Lenovo server. In the unlikely event of a drive failure, you retain possession of your drive while Lenovo replaces the failed drive part. Your data stays safely on your premises, in your hands. The YourDrive YourData service can be purchased in convenient bundles and is optional with Foundation Service. It is bundled with Essential Service and Advanced Service.

  • Health Check

    Having a trusted partner who can perform regular and detailed health checks is central to maintaining efficiency and ensuring that your systems and business are always running at their best. Health Check supports Lenovo-branded server, storage, and networking devices, as well as select Lenovo-supported products from other vendors that are sold by Lenovo or a Lenovo-Authorized Reseller.

Examples of region-specific warranty terms are second or longer business day parts delivery or parts-only base warranty.

If warranty terms and conditions include onsite labor for repair or replacement of parts, Lenovo will dispatch a service technician to the customer site to perform the replacement. Onsite labor under base warranty is limited to labor for replacement of parts that have been determined to be field-replaceable units (FRUs). Parts that are determined to be customer-replaceable units (CRUs) do not include onsite labor under base warranty.

If warranty terms include parts-only base warranty, Lenovo is responsible for delivering only replacement parts that are under base warranty (including FRUs) that will be sent to a requested location for self-service. Parts-only service does not include a service technician being dispatched onsite. Parts must be changed at customer’s own cost and labor and defective parts must be returned following the instructions supplied with the spare parts.

Lenovo Service offerings are region-specific, and not all preconfigured support and upgrade options are available in every region. For service definitions, region-specific details, and service limitations, or for information about the Lenovo service upgrade offerings available in your region, contact your local Lenovo sales representative or business partner.

Services

Lenovo Services is a dedicated partner to your success. Our goal is to reduce your capital outlays, mitigate your IT risks, and accelerate your time to productivity.

Note: Some service options may not be available in all markets or regions. For more information, go to https://www.lenovo.com/services. For information about Lenovo service upgrade offerings that are available in your region, contact your local Lenovo sales representative or business partner.

Here’s a more in-depth look at what we can do for you:

  • Asset Recovery Services

    Asset Recovery Services (ARS) helps customers recover the maximum value from their end-of-life equipment in a cost-effective and secure way. On top of simplifying the transition from old to new equipment, ARS mitigates environmental and data security risks associated with data center equipment disposal. Lenovo ARS is a cash-back solution for equipment based on its remaining market value, yielding maximum value from aging assets and lowering total cost of ownership for your customers. For more information, see the ARS page, https://lenovopress.com/lp1266-reduce-e-waste-and-grow-your-bottom-line-with-lenovo-ars.

  • Assessment Services

    An Assessment helps solve your IT challenges through an onsite, multi-day session with a Lenovo technology expert. We perform a tools-based assessment which provides a comprehensive and thorough review of a company's environment and technology systems. In addition to the technology based functional requirements, the consultant also discusses and records the non-functional business requirements, challenges, and constraints. Assessments help organizations like yours, no matter how large or small, get a better return on your IT investment and overcome challenges in the ever-changing technology landscape.

  • Design Services

    Professional Services consultants perform infrastructure design and implementation planning to support your strategy. The high-level architectures provided by the assessment service are turned into low level designs and wiring diagrams, which are reviewed and approved prior to implementation. The implementation plan will demonstrate an outcome-based proposal to provide business capabilities through infrastructure with a risk-mitigated project plan.

  • Basic Hardware Installation

    Lenovo experts can seamlessly manage the physical installation of your server, storage, or networking hardware. Working at a time convenient for you (business hours or off shift), the technician will unpack and inspect the systems on your site, install options, mount in a rack cabinet, connect to power and network, check and update firmware to the latest levels, verify operation, and dispose of the packaging, allowing your team to focus on other priorities.

  • Deployment Services

    When investing in new IT infrastructures, you need to ensure your business will see quick time to value with little to no disruption. Lenovo deployments are designed by the development and engineering teams who know our products and solutions better than anyone else, and our technicians own the process from delivery to completion. Lenovo will conduct remote preparation and planning, configure and integrate systems, validate systems, verify and update appliance firmware, train on administrative tasks, and provide post-deployment documentation. Your IT staff can leverage our skills to free themselves for higher-level roles and tasks.

  • Integration, Migration, and Expansion Services

    Move existing physical & virtual workloads easily, or determine technical requirements to support increased workloads while maximizing performance. Includes tuning, validation, and documenting ongoing run processes. Leverage migration assessment planning documents to perform necessary migrations.

  • Data Center Power and Cooling Services

    The Data Center Infrastructure team will provide solution design and implementation services to support the power and cooling needs of the multi-node chassis and multi-rack solutions. This includes designing for various levels of power redundancy and integration into the customer power infrastructure. The Infrastructure team will work with site engineers to design an effective cooling strategy based on facility constraints or customer goals and optimize a cooling solution to ensure high efficiency and availability. The Infrastructure team will provide the detailed solution design and complete integration of the cooling solution into the customer data center. In addition, the Infrastructure team will provide rack and chassis level commissioning and stand-up of the water-cooled solution which includes setting and tuning of the flow rates based on water temperature and heat recovery targets. Lastly, the Infrastructure team will provide cooling solution optimization and performance validation to ensure the highest overall operational efficiency of the solution.

Rack cabinets

The N1380 enclosure is supported in the following racks:

  • Lenovo EveryScale 42U Onyx Heavy Duty Rack Cabinet, model 1410-O42
  • Lenovo EveryScale 42U Pearl Heavy Duty Rack Cabinet, model 1410-P42
  • Lenovo EveryScale 48U Onyx Heavy Duty Rack Cabinet, model 1410-O48
  • Lenovo EveryScale 48U Pearl Heavy Duty Rack Cabinet, model 1410-P48

Considering the weight of the trays in the enclosure, an onsite material lift is required to allow service by a single person. If you do not already have a material lift available, Lenovo offers the Genie Lift GL-8 material lift as a configurable option to the rack cabinets. Ordering information is listed in the following table.

Table 29. Genie Lift GL-8 ordering information
Model          Description
7D5YCTO1WW     Genie Lift GL-8 Material Lift

Configuration rules:

  • For the SC750 V4 tray and N1380 enclosure, the following GL-8 components are required:
    • Foot-release brake, feature BFPW
    • Load Platform, feature BFPV
    • Rotating Fixture, feature TBD
    • Stand Plate, feature TBD
  • The following component is optional:
    • Utility Cart, feature TBD

Lenovo Financial Services

Lenovo Financial Services reinforces Lenovo’s commitment to deliver pioneering products and services that are recognized for their quality, excellence, and trustworthiness. Lenovo Financial Services offers financing solutions and services that complement your technology solution anywhere in the world.

We are dedicated to delivering a positive finance experience for customers like you who want to maximize your purchase power by obtaining the technology you need today, protect against technology obsolescence, and preserve your capital for other uses.

We work with businesses, non-profit organizations, governments and educational institutions to finance their entire technology solution. We focus on making it easy to do business with us. Our highly experienced team of finance professionals operates in a work culture that emphasizes the importance of providing outstanding customer service. Our systems, processes and flexible policies support our goal of providing customers with a positive experience.

We finance your entire solution. Unlike others, we allow you to bundle everything you need from hardware and software to service contracts, installation costs, training fees, and sales tax. If you decide weeks or months later to add to your solution, we can consolidate everything into a single invoice.

Our Premier Client services provide large accounts with special handling services to ensure these complex transactions are serviced properly. As a premier client, you have a dedicated finance specialist who manages your account through its life, from first invoice through asset return or purchase. This specialist develops an in-depth understanding of your invoice and payment requirements. For you, this dedication provides a high-quality, easy, and positive financing experience.

For your region-specific offers, please ask your Lenovo sales representative or your technology provider about the use of Lenovo Financial Services. For more information, see the following Lenovo website:

https://www.lenovo.com/us/en/landingpage/lenovo-financial-services/

Seller training courses

The following sales training courses are offered for employees and partners (login required). Courses are listed in date order.

  1. Family Portfolio: Supercomputing Servers Powered by Intel
    2024-09-27 | 20 minutes | Employees Only

    This course covers the Intel-based Supercomputing server family, which includes the SD650 V3, SD650-N V3, and SC750 V4. After completing this course, you will be able to identify products and features within this Supercomputing family, describe the customer benefits of this product family, and recognize when a specific product or products should be selected.

    Start the training:
    Employee link: Grow@Lenovo

    Course code: SXXW2519r2
  2. Q2 Solutions Launch Lenovo ThinkSystem SC750 V4 Quick Hit
    2024-09-17 | 5 minutes | Employees and Partners

    This Quick Hit focuses on the Lenovo ThinkSystem SC750 V4, the first server in the supercomputing class powered by Intel® Xeon® 6 processors. This 2-socket server is designed to meet the demanding needs of HPC environments. It is the first server designed for a new water-cooled supercomputing enclosure.

    Start the training:
    Employee link: Grow@Lenovo
    Partner link: Lenovo Partner Learning

    Course code: SXXW2519r2a

Trademarks

Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both. A current list of Lenovo trademarks is available on the Web at https://www.lenovo.com/us/en/legal/copytrade/.

The following terms are trademarks of Lenovo in the United States, other countries, or both:
Lenovo®
from Exascale to Everyscale®
Neptune®
ServerProven®
System x®
ThinkServer®
ThinkShield®
ThinkSystem®
XClarity®

The following terms are trademarks of other companies:

AMD is a trademark of Advanced Micro Devices, Inc.

Intel® and Xeon® are trademarks of Intel Corporation or its subsidiaries.

Linux® is the trademark of Linus Torvalds in the U.S. and other countries.

Microsoft®, ActiveX®, Hyper-V®, PowerShell, Windows PowerShell®, and Windows® are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.