Kioxia Showcases RAID Offload Solution for NVMe Drives

At FMS 2024, Kioxia showcased a proof-of-concept demonstration of their proposed new RAID offload methodology for enterprise SSDs. The motivation is clear: as SSDs get faster with each generation, RAID arrays face significant challenges in scaling performance to match. Even when RAID operations are managed by a dedicated RAID card, a single write request in a RAID 5 array, for instance, involves two reads and two writes to different drives. Without hardware acceleration, the data from those reads must travel all the way back to the CPU and main memory for parity processing before the writes can be issued.
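The two-reads-two-writes cost comes from the RAID 5 read-modify-write sequence: the old data block and old parity block are read, the new parity is computed by XOR, and then the new data and new parity are written. A minimal sketch of that parity update (illustrative only, not Kioxia's implementation):

```python
# RAID 5 partial-stripe write ("read-modify-write"): updating one data
# block requires reading the old data and the old parity (two reads),
# then writing the new data and the new parity (two writes).

def rmw_update(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Return the new parity block: P' = P ^ D_old ^ D_new."""
    return bytes(p ^ d0 ^ d1 for p, d0, d1 in zip(old_parity, old_data, new_data))

# Three-drive example: data blocks d0, d1 and parity p = d0 ^ d1.
d0 = bytes([0b1010, 0b0011])
d1 = bytes([0b0110, 0b0101])
p  = bytes(a ^ b for a, b in zip(d0, d1))

# Overwrite d0; recompute parity from only the blocks actually touched,
# without re-reading the rest of the stripe (d1).
d0_new = bytes([0b1111, 0b0000])
p_new = rmw_update(d0, p, d0_new)

# Invariant: the updated parity equals the full-stripe XOR of the new data.
assert p_new == bytes(a ^ b for a, b in zip(d0_new, d1))
```

In software RAID, every byte in this XOR pipeline crosses the PCIe bus into host DRAM and back out again, which is exactly the traffic Kioxia's offload aims to eliminate.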

Kioxia has proposed utilizing PCIe direct memory access along with the SSD's controller memory buffer (CMB) to prevent the data from having to travel up to the CPU and back. The necessary parity computation is carried out by an accelerator block within the SSD controller itself.

In Kioxia’s proof-of-concept implementation, the DMA engine can access the entire host address space (including the peer SSD’s BAR-mapped CMB), allowing it to receive and transfer data as needed from neighboring SSDs on the bus. Kioxia reported that their offload PoC showed nearly a 50% reduction in CPU utilization and more than a 90% reduction in system DRAM utilization compared to software RAID executed on the CPU. The proposed offload scheme can also handle scrubbing operations without consuming host CPU cycles for the parity computation task.

Kioxia has already taken steps to contribute these features to the NVM Express working group. If accepted, the proposed offload scheme will become part of a standard that could be widely adopted by multiple SSD vendors.
