Live Partition Mobility - IBM Power E1050

With LPM, you can move a running LPAR from one system to another without disruption. Inactive partition mobility allows you to move a powered-off LPAR from one system to another.

LPM provides systems management flexibility and improves system availability by avoiding the following situations:

• Planned outages for hardware upgrade or firmware maintenance.

• Unplanned downtime. With preventive failure management, if a server indicates a potential failure, you can move its LPARs to another server before the failure occurs.

For more information and requirements for LPM, see IBM PowerVM Live Partition Mobility, SG24-7460.

HMC 10.1.1020.0 and VIOS 3.1.3.21 or later provide the following enhancements to the LPM feature:

• Automatically choose the fastest network for the LPM memory transfer.

• Allow LPM when a virtual optical device is assigned to a partition.
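The following minimal Python sketch shows how such an LPM operation is typically driven from the HMC CLI over SSH by using the migrlpar command, first in validate mode and then in migrate mode. The HMC host name, managed system names, and partition name are hypothetical placeholders.

import subprocess

HMC = "hmc01.example.com"   # hypothetical HMC host name
SRC = "E1050-src"           # hypothetical source managed system
TGT = "E1050-tgt"           # hypothetical target managed system
LPAR = "prodlpar1"          # hypothetical partition name

def hmc(cmd: str) -> str:
    """Run an HMC CLI command over SSH and return its output."""
    result = subprocess.run(["ssh", f"hscroot@{HMC}", cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Validate the migration first (-o v); the HMC reports any blocking issues.
print(hmc(f"migrlpar -o v -m {SRC} -t {TGT} -p {LPAR}"))

# If validation passes, perform the live migration (-o m).
print(hmc(f"migrlpar -o m -m {SRC} -t {TGT} -p {LPAR}"))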

5.1.5 Active Memory Mirroring

Active Memory Mirroring (AMM) for Hypervisor is available as an option (#EM8G) to enhance resilience by mirroring critical memory that is used by the PowerVM hypervisor so that the hypervisor can continue operating if a memory failure occurs.

A portion of the available memory can be proactively partitioned so that a duplicate set can be used if a non-correctable memory error occurs. This partitioning can be implemented at the granularity of DIMMs or logical memory blocks.

5.1.6 Remote Restart

Remote Restart is a high availability (HA) option for partitions. If an error occurs that causes a server outage, a partition that is configured for Remote Restart can be restarted on a different physical server. When restarting the failed server would take a long time, the Remote Restart function can be used for faster reprovisioning of the partition: typically, this task can be done faster than restarting the stopped server and then restarting its partitions. The Remote Restart function relies on technology that is similar to LPM, where a partition is configured with storage on a SAN that is shared (accessible) by the server that hosts the partition.

HMC 10R1 provides an enhancement to the Remote Restart feature that enables remote restart when a virtual optical device is assigned to a partition.
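The following hedged Python sketch shows how a Remote Restart operation might be automated by using the HMC rrstartlpar command; all host, system, and partition names are hypothetical placeholders.

import subprocess

HMC = "hmc01.example.com"   # hypothetical HMC host name
SRC = "E1050-src"           # failed source server (hypothetical)
TGT = "E1050-tgt"           # restart target server (hypothetical)
LPAR = "prodlpar1"          # hypothetical partition name

# Restart the Remote Restart-capable partition on the target server.
subprocess.run(["ssh", f"hscroot@{HMC}",
                f"rrstartlpar -o restart -m {SRC} -t {TGT} -p {LPAR}"],
               check=True)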

5.1.7 IBM Power processor modes

Although they are not virtualization features, the IBM Power processor modes are described here because they affect various virtualization features.

On IBM Power servers, partitions can be configured to run in several modes, including the following modes:

• Power8

This native mode for Power8 processors implements version 2.07 of the IBM Power instruction set architecture (ISA). For more information, see Processor compatibility mode definitions.

• Power9

This native mode for Power9 processors implements version 3.0 of the IBM Power ISA. For more information, see Processor compatibility mode definitions.

• Power10

This native mode for Power10 processors implements version 3.1 of the IBM Power ISA. For more information, see Processor compatibility mode definitions.
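As an illustration, the compatibility modes can be inspected from the HMC CLI. The following sketch lists the modes that a managed system supports and the current and preferred mode of each partition; the attribute names follow the HMC lssyscfg conventions, and the host and system names are hypothetical, so verify them against your HMC level.

import subprocess

HMC = "hmc01.example.com"   # hypothetical HMC host name
SYSTEM = "E1050-src"        # hypothetical managed system name

def hmc(cmd: str) -> str:
    result = subprocess.run(["ssh", f"hscroot@{HMC}", cmd],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Modes that the managed system supports (for example, Power8/Power9/Power10).
print(hmc(f"lssyscfg -r sys -m {SYSTEM} -F lpar_proc_compat_modes"))

# Current and preferred compatibility mode of each partition.
print(hmc(f"lssyscfg -r lpar -m {SYSTEM} "
          "-F name,curr_lpar_proc_compat_mode,desired_lpar_proc_compat_mode"))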

Figure 5-2 shows the available processor modes on a Power10 processor-based mid-range server.

Figure 5-2 Processor modes

Processor compatibility mode is important when an LPM migration is planned between different generations of servers. An LPAR that might be migrated to a machine with a processor from another generation must be activated in a specific compatibility mode.

Note: Migrating an LPAR from a Power7 processor-based server to a Power10 processor-based mid-range server by using LPM is not supported; however, the following steps can be completed to accomplish this task:

1. Migrate the LPAR from a Power7 processor-based server to a Power8 or Power9 processor-based server by using LPM.

2. Migrate the LPAR from the Power8 or Power9 processor-based server to a Power10 processor-based mid-range server.

The OS running on the Power7 processor-based server must be supported on the Power10 processor-based mid-range server or must be upgraded to a supported level before completing these steps.
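The two-hop procedure in the preceding note maps onto two successive migrlpar calls. A minimal sketch, assuming that a single HMC manages all three servers and using hypothetical system and partition names:

import subprocess

HMC = "hmc01.example.com"   # hypothetical HMC that manages all three servers
LPAR = "legacylpar"         # hypothetical partition name

def migrate(src: str, tgt: str) -> None:
    cmd = f"migrlpar -o m -m {src} -t {tgt} -p {LPAR}"
    subprocess.run(["ssh", f"hscroot@{HMC}", cmd], check=True)

# Hop 1: Power7 server to an interim Power8 or Power9 server.
migrate("P7-server", "P9-server")

# Hop 2: interim server to the Power10 mid-range target.
migrate("P9-server", "E1050-tgt")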

5.1.8 Single-root I/O virtualization

Single-root I/O virtualization (SR-IOV) is an extension to the Peripheral Component Interconnect Express (PCIe) specification that allows multiple OSs to simultaneously share a PCIe adapter with little or no runtime involvement from a hypervisor or other virtualization intermediary.

SR-IOV is a PCI standard architecture that enables PCIe adapters to become self-virtualizing. It enables adapter consolidation through sharing, much like logical partitioning enables server consolidation. With an SR-IOV-capable adapter, you can assign virtual slices of a single physical adapter to multiple partitions through logical ports, all without a VIOS.
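On an HMC-managed system, a logical port is typically created with the chhwres command when the adapter is in SR-IOV shared mode. The following fragment is a sketch of dynamically adding an SR-IOV Ethernet logical port to a partition; the adapter ID, physical port ID, and all names are hypothetical placeholders, so verify the attribute syntax against your HMC level.

import subprocess

HMC = "hmc01.example.com"   # hypothetical HMC host name
SYSTEM = "E1050-src"        # hypothetical managed system name
LPAR = "prodlpar1"          # hypothetical partition name

# Add an SR-IOV Ethernet logical port on physical port 0 of adapter 1.
cmd = (f"chhwres -r sriov -m {SYSTEM} --rsubtype logport -o a -p {LPAR} "
       '-a "adapter_id=1,phys_port_id=0,logical_port_type=eth"')
subprocess.run(["ssh", f"hscroot@{HMC}", cmd], check=True)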

5.1.9 More information about virtualization features

The following IBM Redbooks publications provide more information about the virtualization features:

• IBM PowerVM Best Practices, SG24-8062

• IBM PowerVM Virtualization Introduction and Configuration, SG24-7940

• IBM PowerVM Virtualization Managing and Monitoring, SG24-7590

• IBM Power Systems SR-IOV: Technical Overview and Introduction, REDP-5065

Non-volatile Memory Express bays - IBM Power E1050

The IBM Power E1050 server has a 10-NVMe backplane that is always present in the server. It offers up to 10 NVMe bays when all four processor sockets are populated, and six NVMe bays when only two or three processor sockets are populated. The backplane connects to the system board through three Molex Impact connectors and a power connector; there is no cable between the NVMe backplane and the system board.

The wiring strategy and backplane materials are chosen to ensure Gen4 signaling to all NVMe drives. All NVMe connectors are PCIe Gen4 connectors. For more information about the internal connection of the NVMe bays to the processor chips, see Figure 2-13 on page 66.

Each NVMe interface is a Gen4 x4 PCIe bus. The NVMe drives can be in an OS-controlled RAID array. A hardware RAID is not supported on the NVMe drives. The NVMe thermal design supports 18 W for 15-mm NVMe drives and 12 W for 7-mm NVMe drives.
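Because RAID on the internal NVMe drives is OS-controlled, arrays are created with OS tools. As a minimal sketch for a Linux LPAR, the following fragment builds a software RAID 1 (mirror) array with mdadm across two internal NVMe drives; the device names are hypothetical.

import subprocess

# Hypothetical NVMe device names as seen by a Linux LPAR.
DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]

# Create an OS-controlled RAID 1 (mirror) across the two drives.
subprocess.run(["mdadm", "--create", "/dev/md0", "--level=1",
                f"--raid-devices={len(DEVICES)}", *DEVICES], check=True)

# Show the state of the new array.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)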

For more information about the available NVMe drives and how to plug the drives for best availability, see 3.5, “Internal storage” on page 92.

2.3.4 Attachment of I/O drawers

The Power E1050 server can expand the number of its I/O slots by using I/O Expansion Drawers (#EMX0). The number of I/O drawers that can be attached to a Power E1050 server depends on the number of populated processor slots, which determines the number of available internal PCIe slots of the server. Only some slots can be used to attach an I/O Expansion Drawer by using the #EJ2A CXP Converter adapter, which is also referred to as a cable card.

Feature Code #EJ2A is an IBM-designed PCIe Gen4 x16 cable card. It is the only supported cable card for attaching fanout modules of an I/O Expansion Drawer to the Power E1050 server; previous cards from a Power E950 server cannot be used. Feature Code #EJ2A supports copper and optical cables for the attachment of a fanout module.

Note: The IBM e-config configurator adds 3-meter copper cables (Feature Code #ECCS) to the configuration if no cables are manually specified. If you want optical cables, make sure to configure them explicitly.

Table 2-11 lists the PCIe slot order for the attachment of an I/O Expansion Drawer, the maximum number of I/O Expansion Drawers and fanout modules, and the maximum number of available slots (depending on the number of populated processor sockets).

Table 2-11 I/O Expansion Drawer capabilities depend on the number of populated processor slots

For more information about the #EMX0 I/O Expansion Drawer, see 3.9.1, “PCIe Gen3 I/O expansion drawer” on page 99.

2.3.5 System ports

The Power E1050 server has two 1-Gbit Ethernet ports and two USB 2.0 ports to connect to the eBMC service processor. The two eBMC Ethernet ports are used to connect one or two HMCs. There are no other HMC ports, as in servers that have a Flexible Service Processor (FSP). The eBMC USB ports can be used for a firmware update from a USB stick.

The two eBMC Ethernet ports are connected by using four PCIe lanes each, although the eBMC Ethernet controllers need only one lane. The connections are provided by DCM0, one from each Power10 chip. For more information, see Figure 2-13 on page 66.

The eBMC module, with its two eBMC USB ports, is also connected to chip 0 of DCM0 by using an x4 PHB, although the eBMC module uses only one lane.


For more information about attaching a Power E1050 server to an HMC, see Accessing the eBMC so that you can manage the system.

For more information about how to do a firmware update by using the eBMC USB ports, see Installing the server firmware on the service processor or eBMC through a USB port.
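The eBMC also exposes the industry-standard Redfish REST API over the same Ethernet ports, which can be used for basic health and state queries. A minimal sketch, assuming a hypothetical eBMC address and credentials (the session and Systems endpoints are standard Redfish paths):

import requests

EBMC = "https://ebmc.example.com"   # hypothetical eBMC address
CREDS = {"UserName": "admin", "Password": "changeme"}  # hypothetical credentials

session = requests.Session()
session.verify = False  # lab sketch only; use proper certificates in production

# Open a Redfish session; the token is returned in the X-Auth-Token header.
resp = session.post(f"{EBMC}/redfish/v1/SessionService/Sessions", json=CREDS)
session.headers["X-Auth-Token"] = resp.headers["X-Auth-Token"]

# Read the system resource, which reports the model, power state, and health.
system = session.get(f"{EBMC}/redfish/v1/Systems/system").json()
print(system.get("Model"), system.get("PowerState"), system.get("Status"))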

Service labels - IBM Power E1050

Service providers use these labels to assist them in performing maintenance actions. Service labels are found in various formats and positions, and they are intended to provide readily available information to the servicer during the repair process. Here are some of these service labels and their purposes:

• Location diagrams: Location diagrams are on the system hardware, providing information about the placement of hardware components. Location diagrams might include location codes, drawings of physical locations, concurrent maintenance status, or other data that is pertinent to a repair. Location diagrams are especially useful when multiple components, such as DIMMs, processors, fans, adapters, and power supplies, are installed.

• Remove/replace procedures: Service labels that contain remove/replace procedures are often found on a cover of the system or in other spots that are accessible to the servicer. These labels provide systematic procedures, including diagrams, that detail how to remove or replace certain serviceable hardware components.

• Arrows: Numbered arrows are used to indicate the order of operation and the serviceability direction of components. Some serviceable parts, such as latches, levers, and touch points, must be pulled or pushed in a certain direction and in a certain order for the mechanical mechanisms to engage or disengage. Arrows generally improve the ease of serviceability.

4.5.9 QR labels

QR labels are placed on the system to provide access to key service functions through a mobile device. When the QR label is scanned, it directs you to a landing page for Power10 processor-based systems, which contains the service functions of interest for each machine type and model (MTM) while you are physically at the server. These functions include installation and repair instructions, reference code lookup, and other items.

4.5.10 Packaging for service

The following service features are included in the physical packaging of the systems to facilitate service:

• Color coding (touch points): Blue touch points indicate where a service component can be safely handled during service actions, such as removal or installation.

• Tool-less design: Selected IBM systems support tool-less or simple-tool designs. These designs require no tools, or simple tools such as flathead screwdrivers, to service the hardware components.

• Positive retention: Positive retention mechanisms help to ensure proper connections between hardware components, such as cables to connectors, and between two cards that attach to each other. Without positive retention, hardware components run the risk of becoming loose during shipping or installation, which prevents a good electrical connection. Positive retention mechanisms, such as latches, levers, thumbscrews, pop nylatches (U-clips), and cables, are included to help prevent loose connections and to aid in installing (seating) parts correctly. These positive retention items do not require tools.

4.5.11 Error handling and reporting

In the event of a system hardware failure or an environmentally induced failure, the system runtime error capture capability systematically analyzes the hardware error signature to determine the cause of the failure. The analysis result is stored in system NVRAM. When the system can be successfully restarted, either manually or automatically, or if the system continues to operate, the error is reported to the OS. Hardware and software failures are recorded in the system log. When an HMC is attached in the PowerVM environment, an error log analysis (ELA) routine analyzes the error, forwards the event to the Service Focal Point (SFP) application that runs on the HMC, and notifies the system administrator that it has isolated a likely cause of the system problem. The service processor event log also records unrecoverable checkstop conditions, forwards them to the SFP application, and notifies the system administrator.

The system can call home through the OS to report platform-recoverable errors and errors that are associated with PCI adapters or devices.

In the HMC-managed environment, a Call Home service request is initiated from the HMC, and the pertinent failure data with service parts information and part locations is sent to an IBM service organization. Customer contact information and specific system-related data, such as the MTM and serial number, along with error log data that is related to the failure, are sent to IBM Service.

4.5.12 Live Partition Mobility

With PowerVM Live Partition Mobility (LPM), users can migrate an AIX, IBM i, or Linux VM partition that is running on one IBM Power server to another IBM Power server without disrupting services. The migration transfers the entire system environment, including processor state, memory, attached virtual devices, and connected users. It provides continuous OS and application availability during planned partition outages for repair of hardware and firmware faults. Power10 processor-based servers support secure LPM, where the VM image is encrypted and compressed before transfer. Secure LPM uses the on-chip encryption and compression capabilities of the Power10 processor for optimal performance.

4.5.13 Call Home

Call Home refers to an automatic or manual call from a client location to the IBM support structure with error log data, server status, or other service-related information. Call Home invokes the service organization so that the appropriate service action can begin. Call Home can be done through the ESA that is embedded in the HMC, through a version of the ESA that is embedded in the OSs for non-HMC-managed systems, or through a version of ESA that runs as a stand-alone Call Home application. Although configuring Call Home is optional, clients are encouraged to implement this feature to obtain service enhancements such as reduced problem determination time and faster and potentially more accurate transmittal of error information. In general, using the Call Home feature can result in increased system availability.

4.5.14 IBM Electronic Services

ESA and Client Support Portal (CSP) comprise the IBM Electronic Services solution, which is dedicated to providing fast, exceptional support to IBM clients. ESA is a no-charge tool that proactively monitors and reports hardware events, such as system errors, and collects hardware and software inventory. ESA can help clients focus on their business initiatives, save time, and spend less effort managing day-to-day IT maintenance issues. In addition, the Call Home Cloud Connect Web and Mobile capability extends the common solution and offers IBM Systems-related support information that is applicable to servers and storage.

For more information, see IBM Call Home Connect Cloud.

Serviceability - IBM Power E1050

The purpose of serviceability is to efficiently repair the system while attempting to minimize or eliminate any impact to system operation. Serviceability includes system installation, Miscellaneous Equipment Specification (MES) (system upgrades/downgrades), and system maintenance or repair. Depending on the system and warranty contract, service may be performed by the client, an IBM representative, or an authorized warranty service provider. The serviceability features that are delivered in this system help provide a highly efficient service environment by incorporating the following attributes:

• Designed for IBM System Services Representative (IBM SSR) setup, install, and service.

• Error Detection and Fault Isolation (ED/FI).

• First Failure Data Capture (FFDC).

• Light path service indicators.

• Service and FRU labels that are available on the system.

• Service procedures that are documented in IBM Documentation or available through the HMC.

• Automatic reporting of serviceable events to IBM through the Electronic Service Agent (ESA) Call Home application.

4.5.1 Service environment

In the PowerVM environment, the HMC is a dedicated server that provides functions for configuring and managing servers in either partitioned or full-system partition mode by using a GUI, a command-line interface (CLI), or a Representational State Transfer (REST) API. An HMC that is attached to the system enables support personnel (with client authorization) to log in remotely, or locally by using the physical HMC that is near the server being serviced, to review error logs and perform remote maintenance if required.

The Power10 processor-based servers support several service environments:

• Attachment to one or more HMCs or virtual HMCs (vHMCs) is a supported option with PowerVM. This configuration is the default one for servers that support logical partitions (LPARs) with dedicated or virtual I/O. In this case, all servers have at least one LPAR.

• No HMC. There are two service strategies for non-HMC systems:

– Full-system partition with PowerVM: A single partition owns all the server resources, and only one operating system (OS) may be installed. The primary service interface is through the OS and the service processor.

– Partitioned system with NovaLink: In this configuration, the system can have more than one partition and can be running more than one OS. The primary service interface is through the service processor.
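In the HMC-managed case, the REST API that is mentioned above can also be scripted. The following sketch logs on to the HMC REST interface (port 12443) and lists the managed systems; the host name and credentials are hypothetical placeholders.

import re
import requests

HMC = "https://hmc01.example.com:12443"  # hypothetical HMC; 12443 is the REST port
LOGON = ('<LogonRequest xmlns="http://www.ibm.com/xmlns/systems/power/'
         'firmware/web/mc/2012_10/" schemaVersion="V1_0">'
         "<UserID>hscroot</UserID><Password>changeme</Password></LogonRequest>")

session = requests.Session()
session.verify = False  # lab sketch only; use proper certificates in production

# Log on; the HMC returns an X-API-Session token in the response body.
resp = session.put(f"{HMC}/rest/api/web/Logon", data=LOGON, headers={
    "Content-Type": "application/vnd.ibm.powervm.web+xml; type=LogonRequest"})
token = re.search(r"<X-API-Session>(.+?)</X-API-Session>", resp.text, re.S)
session.headers["X-API-Session"] = token.group(1).strip()

# List the managed systems that this HMC controls.
print(session.get(f"{HMC}/rest/api/uom/ManagedSystem").text[:500])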

4.5.2 Service interface

Support personnel can use the service interface to communicate with the service support applications in a server by using an operator console, a GUI on the management console or service processor, or an OS terminal. The service interface helps to deliver a clear, concise view of available service applications, helping the support team to manage system resources and service information in an efficient and effective way. Applications that are available through the service interface are carefully configured and placed to grant service providers access to important service functions. Different service interfaces are used, depending on the state of the system, hypervisor, and operating environment. The primary service interfaces are:

• LEDs

• Operator panel

• BMC Service Processor menu

• OS service menu

• Service Focal Point (SFP) on the HMC or vHMC with PowerVM

In the light path LED implementation, the system can clearly identify components for replacement by using specific component-level LEDs, and it can guide the servicer directly to the component by signaling (turning on solid) the enclosure fault LED and the component FRU fault LED. The servicer also can use the identify function to flash the FRU-level LED. When this function is activated, a roll-up to the blue enclosure locate LED occurs. These enclosure LEDs turn on solid and can be used to follow the light path from the enclosure down to the specific FRU in the PowerVM environment.

4.5.3 First Failure Data Capture and error data analysis

FFDC is a technique that helps ensure that when a fault is detected in a system, the root cause of the fault is captured without the need to re-create the problem or run any sort of extended tracing or diagnostics program. For most faults, a good FFDC design means that the root cause also can be detected automatically without servicer intervention.

FFDC information, error data analysis, and fault isolation are necessary to implement the advanced serviceability techniques that enable efficient service of the systems and to help determine the failing items.

In the rare absence of FFDC and Error Data Analysis, diagnostics are required to re-create the failure and determine the failing items.

4.5.4 Diagnostics

The general diagnostic objectives are to detect and identify problems so that they can be resolved quickly. Elements of the IBM diagnostics strategy include:

• Provide a common error code format equivalent to a system reference code with PowerVM, a system reference number, a checkpoint, or a firmware error code.

• Provide fault detection and problem isolation procedures. Support a remote connection, which can be used by the IBM Remote Support Center or IBM Designated Service.

• Provide interactive intelligence within the diagnostics, with detailed online failure information, while connected to the IBM back-end system.

4.5.5 Automatic diagnostics

The processor and memory FFDC technology is designed to perform without the need to re-create the problem and without user intervention. Solid and intermittent errors are designed to be correctly detected and isolated at the time that the failure occurs. Runtime and boot-time diagnostics fall into this category.

4.5.6 Stand-alone diagnostics

As the name implies, stand-alone or user-initiated diagnostics require user intervention. The user must perform manual steps, including:

• Booting from the diagnostics CD, DVD, Universal Serial Bus (USB) device, or network

• Interactively selecting steps from a list of choices

4.5.7 Concurrent maintenance

The determination of whether a firmware release can be updated concurrently is identified in the readme file that is released with the firmware. An HMC is required for a concurrent firmware update with PowerVM. In addition, concurrent maintenance of PCIe adapters and NVMe drives is supported by PowerVM. Power supplies, fans, and operating panel LCDs are hot-pluggable.

Power10 processor RAS - IBM Power E1050

Although there are many internal differences between the Power10 processor and the Power9 processor that relate to performance, the number of cores, and other features, the general RAS philosophy for how errors are handled remains the same. Therefore, information about the Power9 processor-based subsystem RAS can still be referenced to understand the design. For more information, see Introduction to IBM Power Reliability, Availability, and Serviceability for Power9 processor-based systems using IBM PowerVM.

The Power E1050 processor module is a dual-chip module (DCM), which differs from that of the Power E950, which has a single-chip module (SCM). Each DCM has 30 processor cores, which is 120 cores for a 4-socket (4S) Power E1050. In comparison, a 4S Power E950 supports 48 cores. The internal processor buses are twice as fast, with the Power E1050 running at 32 Gbps.

Despite the increased number of cores and the faster high-speed processor bus interfaces, the RAS capabilities are equivalent, with features like Processor Instruction Retry (PIR), L2/L3 cache ECC protection with cache line delete, and the CRC fabric bus retry that is characteristic of Power9 and Power10 processors. As with the Power E950, when an internal fabric bus lane encounters a hard failure in a Power E1050, the lane can be dynamically spared out.

Figure 4-2 shows the Power10 DCM.

Figure 4-2 Power10 Dual-Chip Module

4.3.1 Cache availability

The L2/L3 caches in the Power10 processor and the cache in the memory buffer chip are protected with double-bit detect, single-bit correct ECC. In addition, if a threshold of correctable errors that are detected on a cache line is reached, the data in the cache line is purged and the cache line is removed from further operation without requiring a restart in the PowerVM environment. Modified data is handled through Special Uncorrectable Error (SUE) handling. The L1 data and instruction caches also have a retry capability for intermittent errors and a cache set delete mechanism for handling solid failures.
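To make the double-bit detect, single-bit correct behavior concrete, the following toy Python sketch implements a classic Hamming SEC-DED code on 4 data bits. It is purely illustrative: real cache ECC operates in hardware on much wider words, but the correct/detect decision logic is the same in spirit.

def encode(d1, d2, d3, d4):
    """Encode 4 data bits into an 8-bit SEC-DED codeword."""
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    word = [p1, p2, d1, p4, d2, d3, d4]
    return [sum(word) % 2] + word  # prepend overall parity for double-error detect

def decode(codeword):
    """Return 'ok', 'corrected', or 'uncorrectable' plus the repaired word."""
    p0, w = codeword[0], list(codeword[1:])
    syndrome = 0
    for pos in range(1, 8):       # recompute the parity checks
        if w[pos - 1]:
            syndrome ^= pos
    parity_ok = (sum(w) % 2) == p0
    if syndrome == 0:
        # Either clean, or only the overall parity bit itself flipped.
        return ("ok", codeword) if parity_ok else ("corrected", [1 - p0] + w)
    if not parity_ok:             # single-bit error: syndrome names the position
        w[syndrome - 1] ^= 1
        return ("corrected", [p0] + w)
    return ("uncorrectable", codeword)  # double-bit error: purge/SUE path

cw = encode(1, 0, 1, 1)
cw[3] ^= 1                        # inject a single-bit error
print(decode(cw)[0])              # -> corrected
cw[5] ^= 1                        # inject a second error
print(decode(cw)[0])              # -> uncorrectable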

4.3.2 Special Uncorrectable Error handling

SUE handling prevents an uncorrectable error in memory or cache from immediately causing the system to terminate. Rather, the system tags the data and determines whether it will ever be used again. If the error is irrelevant, it does not force a checkstop. When and if the data is used, the impact is contained: I/O adapters that are controlled by an I/O hub controller freeze if the data is transferred to an I/O device; otherwise, termination can be limited to the program or kernel that owns the data, or to the hypervisor if the hypervisor owns it.

4.3.3 Uncorrectable error recovery

When the auto-restart option is enabled, the system can automatically restart following an unrecoverable software error, hardware failure, or environmentally induced (AC power) failure.

4.4 I/O subsystem RAS

The Power E1050 provides 11 general-purpose Peripheral Component Interconnect Express (PCIe) slots that allow for hot-plugging of I/O adapters, which makes the adapters concurrently maintainable. These PCIe slots operate at Gen4 and Gen5 speeds. Some of the PCIe slots support OpenCAPI and I/O expansion drawer cable cards.

Unlike the Power E950, the Power E1050 location codes start from index 0, as with all Power10 systems. However, slot C0 is not a general-purpose PCIe slot because it is reserved for the eBMC service processor card.

Another difference between the Power E950 and the Power E1050 is that all the Power E1050 slots are directly connected to a Power10 processor. In the Power E950, some slots are connected to the Power9 processor through I/O switches.

All 11 PCIe slots are available when three or four processor sockets are populated. In the 2-socket configuration, only seven PCIe slots are functional.

DASD options

The Power E1050 provides 10 internal Non-volatile Memory Express (NVMe) drive bays that run at Gen4 speeds, and the NVMe drives are concurrently maintainable. The NVMe bays are connected to DCM0 and DCM3, so in a 2-socket configuration, only six of the bays are available; to access all 10 internal NVMe bays, you must have a 4S configuration. Unlike the Power E950, the Power E1050 has no internal serial-attached SCSI (SAS) drives. You can use an external drawer to provide SAS drives.

The internal NVMe drives support OS-controlled RAID 0 and RAID 1 arrays, but not hardware RAID. For best redundancy, use an OS mirror and a mirrored dual Virtual I/O Server (VIOS) configuration. To ensure as much separation as possible in the hardware path between mirror pairs, the following NVMe configuration is recommended:

• Mirrored OS: NVMe3 and NVMe4 pairs, or NVMe8 and NVMe9 pairs

• Mirrored dual VIOS:

– Dual VIOS: NVMe3 for VIOS1, and NVMe4 for VIOS2.

– Mirrored dual VIOS: NVMe9 mirrors NVMe3, and NVMe8 mirrors NVMe4.
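On AIX, for example, the recommended OS mirror across such a pair is set up with standard LVM commands. A minimal sketch, assuming hdisk0 and hdisk1 are the hypothetical AIX device names for the NVMe3 and NVMe4 pair:

import subprocess

PRIMARY, MIRROR = "hdisk0", "hdisk1"   # hypothetical names for the NVMe3/NVMe4 pair

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run(f"extendvg rootvg {MIRROR}")               # add the second drive to rootvg
run("mirrorvg rootvg")                         # mirror all rootvg logical volumes
run(f"bosboot -ad /dev/{MIRROR}")              # write a boot image to the mirror
run(f"bootlist -m normal {PRIMARY} {MIRROR}")  # allow booting from either drive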

IBM storage - IBM Power E1050

The IBM System Storage Disk Systems products and offerings provide compelling storage solutions with superior value for all levels of business, from entry-level to high-end storage systems. IBM Storage simplifies data infrastructure by using an underlying software foundation to strengthen and streamline the storage in the hybrid cloud environment, which uses a simplified approach to containerization, management, and data protection.

For more information about the various offerings, see Data Storage Solutions.

The following sections highlight a few of these offerings.

IBM Elastic Storage System

IBM Elastic Storage® System is a modern implementation of software-defined storage (SDS). The IBM Elastic Storage System 3200 and IBM Elastic Storage System 5000 make it easier for you to deploy fast, highly scalable storage for artificial intelligence (AI) and big data. With the low latency and high performance of NVMe storage technology and the global file system and global data services of IBM Spectrum® Scale, the IBM Elastic Storage System 3200 and 5000 nodes can grow to over-yottabyte (YB) configurations and be integrated into a federated global storage system. For more information, see IBM Elastic Storage System.

IBM FlashSystem: Flash data storage

The IBM FlashSystem® family is a portfolio of cloud-enabled storage systems that can be easily deployed and quickly scaled to help optimize storage configurations, streamline issue resolution, and reduce storage costs. IBM FlashSystem is built with IBM Spectrum Virtualize software to help deploy sophisticated hybrid cloud storage solutions, accelerate infrastructure modernization, address security needs, and maximize value by using the power of AI. The new IBM FlashSystem models provide enterprise-grade functions and deliver the performance to facilitate cybersecurity without compromising production workloads. They offer the advantages of end-to-end NVMe, the innovation of IBM FlashCore® technology, and storage-class memory (SCM) for ultra-low latency. For more information, see IBM FlashSystem.

IBM DS8000 storage system

IBM DS8900F is the next generation of enterprise data systems that are built with the most advanced Power processor-based technology and feature ultra-low application response times. Designed for data-intensive and mission-critical workloads, DS8900F adds next-level performance, data protection, resiliency, and availability across hybrid cloud solutions. This outcome is made possible through ultra-low latency, better than seven 9s (99.99999%) availability, transparent cloud tiering, and advanced data protection against malware and ransomware. Additionally, this enterprise-class storage solution provides superior performance and higher capacity, which enables the consolidation of all mission-critical workloads in one place. IBM DS8900F can provide 100% data encryption at rest, in flight, and in the cloud. For more information, see IBM DS8000 Storage system.

IBM Tape Solutions

A simple and inexpensive data resilience solution virtually impervious to cyberattacks exists from one of the oldest technologies in the data center: tape. The solution requires users to simply remove the tapes storing their data from their networks and stack the tapes on the nearest shelf. This air gap that is created between the data and troublemakers provides a complete cyber-resilient defense that effectively prevents penetration by hackers. Air gaps are just one of several types of data protection that tape can offer. IBM tape-based data storage solutions provide many data protection features, including data encryption and compression, cloud-based disaster recovery, key management, and write once, read many (WORM) technology. For more information, see IBM Tape Solutions.

IBM SAN Volume Controller

IBM SAN Volume Controller (SVC) is an enterprise-class system that consolidates storage from over 500 IBM and third-party storage systems to improve efficiency, simplify management and operations, modernize storage with new capabilities, and enable a common approach to hybrid cloud regardless of storage system type. IBM SVC provides a complete set of data resilience capabilities with high availability (HA), business continuance, and data security features. By helping effectively maximize the economics of massive volumes of data, the IBM SVC helps improve data value, increase data security, enhance data simplicity, and promote 100% availability with IBM HyperSwap®. The IBM SVC also provides IBM Easy Tier® AI-driven automated tiering to improve performance at a lower cost. For more information, see IBM SVC.