Kioxia and Pliops Unveil Next-Gen Storage Innovations at NVIDIA GTC 2025

As artificial intelligence (AI) continues to evolve, data centers require increasingly sophisticated storage solutions to balance performance, cost, and efficiency. High-speed solid-state drives (SSDs) play a crucial role in AI workloads by enabling rapid data access, particularly for training and inference tasks that rely on GPUs.

At NVIDIA GTC 2025, Kioxia and Pliops announced storage solutions aimed at optimizing AI performance, increasing storage density, and reducing reliance on traditional hard disk drives (HDDs).

Kioxia’s Next-Generation QLC SSDs

In recent months, major SSD manufacturers have been introducing high-capacity quad-level cell (QLC) SSDs, positioning them as a viable alternative to traditional HDDs for secondary storage. Kioxia has now entered the arena with its latest high-density SSD innovation.

Kioxia LC9 Series NVMe SSD: High-Capacity Storage for AI

Kioxia has unveiled the LC9 Series NVMe SSD, an impressive 122.88TB storage solution tailored for AI applications. This SSD features a 2.5-inch form factor and leverages Kioxia’s 8th-generation 3D QLC NAND technology, built using CMOS Directly Bonded to Array (CBA) to maximize density.

Key specifications include:

  • PCIe 5.0 interface for ultra-fast data transfer
  • Dual-port capability for enhanced fault tolerance and multi-system connectivity
  • Up to 128 gigatransfers per second (GT/s) aggregate over a four-lane PCIe 5.0 link (4 × 32 GT/s), supporting fast AI model training and inference
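For context, PCIe 5.0 runs at 32 GT/s per lane, so a four-lane SSD link carries 128 GT/s in aggregate. A back-of-envelope estimate (assuming the standard 128b/130b line encoding; figures are illustrative, not a Kioxia spec) shows what that means in usable bandwidth:

```python
# Back-of-envelope PCIe 5.0 x4 bandwidth estimate (illustrative only).
GT_PER_LANE = 32e9        # PCIe 5.0: 32 gigatransfers/second per lane
LANES = 4                 # common SSD link width
ENCODING = 128 / 130      # 128b/130b line encoding overhead

aggregate_gt = GT_PER_LANE * LANES           # 128 GT/s total
bytes_per_sec = aggregate_gt * ENCODING / 8  # one bit per transfer, 8 bits/byte
print(f"Aggregate: {aggregate_gt / 1e9:.0f} GT/s")
print(f"Usable bandwidth: {bytes_per_sec / 1e9:.1f} GB/s per direction")
```

That works out to roughly 15.8 GB/s per direction before protocol overhead, which is the headroom AI pipelines tap when streaming training data or model weights from flash.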

AI-Optimized SSDs for Large Language Models

Kioxia emphasizes that high-capacity SSDs are essential for AI workloads, particularly for:

  • Large language model (LLM) training
  • Storing and retrieving extensive datasets
  • Enhancing inference performance and fine-tuning models

Additionally, this SSD is optimized for use with Kioxia’s newly introduced AiSAQ technology, which enhances Retrieval Augmented Generation (RAG) performance. By storing vector database elements directly on SSDs instead of costly DRAM, AiSAQ reduces memory expenses while maintaining high-speed retrieval.
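The core idea behind SSD-resident vector search can be sketched in a few lines: keep the embedding store in a memory-mapped file on flash so only the pages a query actually touches are read into DRAM. This is a minimal, hypothetical illustration, not Kioxia's AiSAQ implementation; all names here are invented, and a real system would use an approximate-nearest-neighbor index rather than a brute-force scan.

```python
import os
import tempfile
import numpy as np

# Hypothetical sketch: embeddings live in a file on flash and are
# memory-mapped at query time, so DRAM holds only the pages touched
# during a search. (Illustrative only -- not Kioxia's AiSAQ code.)

DIM = 128
N = 10_000

# Build a vector store on disk (stand-in for an SSD-resident index).
path = os.path.join(tempfile.mkdtemp(), "vectors.npy")
rng = np.random.default_rng(0)
vectors = rng.standard_normal((N, DIM)).astype(np.float32)
np.save(path, vectors)

# Query time: memory-map instead of loading all N*DIM floats into DRAM.
store = np.load(path, mmap_mode="r")
query = vectors[42] + 0.01 * rng.standard_normal(DIM).astype(np.float32)

# Brute-force cosine similarity over the mapped array; a production
# system would traverse an ANN graph and touch far fewer pages.
norms = np.linalg.norm(store, axis=1) * np.linalg.norm(query)
scores = store @ query / norms
best = int(np.argmax(scores))
print("nearest neighbor:", best)  # recovers index 42
```

The design trade-off AiSAQ targets is exactly this one: flash is slower than DRAM per access, but a well-structured index touches few enough pages per query that retrieval stays fast while capacity costs drop sharply.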

Pliops’ Strategic Collaboration with LMCache Lab

Pliops, a developer of solid-state storage and acceleration technologies, has announced a strategic partnership with LMCache Lab at the University of Chicago, the team behind the vLLM Production Stack. This collaboration aims to dramatically enhance LLM inference performance by optimizing shared storage and cache offloading.

Key Highlights of the Collaboration

  • Pliops provides disaggregated smart storage to enhance vLLM execution.
  • The combined solution enables efficient offloading of vLLM cache, ensuring scalability and fault tolerance.
  • It introduces a petabyte-scale memory tier beneath GPU high-bandwidth memory (HBM), improving GPU compute efficiency for AI applications.
  • Computed key-value (KV) caches are retained and retrieved efficiently, significantly speeding up LLM inference.
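The mechanism in the last bullet can be sketched simply: once the attention key/value tensors for a prompt prefix are computed, they are stored in a cheaper tier keyed by that prefix, so a later request sharing the prefix skips recomputation. The toy class below is a hypothetical illustration of prefix-keyed KV-cache offloading, not the Pliops/LMCache design; a real deployment would persist tensors to shared SSD-backed storage rather than a Python dict.

```python
import hashlib
import numpy as np

class KVCacheTier:
    """Toy stand-in for a storage-backed KV-cache tier beneath GPU HBM.

    Illustrative only -- the Pliops/LMCache system is far more elaborate.
    """

    def __init__(self):
        self._store = {}  # a real tier would persist to SSD/shared storage

    @staticmethod
    def _key(token_ids):
        # Key the cache entry by a hash of the exact token-ID prefix.
        return hashlib.sha256(bytes(token_ids)).hexdigest()

    def put(self, token_ids, kv_tensors):
        self._store[self._key(token_ids)] = kv_tensors

    def get(self, token_ids):
        return self._store.get(self._key(token_ids))

tier = KVCacheTier()
prefix = [1, 2, 3, 4]              # token IDs of a shared prompt prefix
kv = np.ones((2, len(prefix), 8))  # fake (K, V) tensors for that prefix
tier.put(prefix, kv)

# A second request sharing the prefix reuses the cached tensors,
# skipping the prefill computation for those tokens.
hit = tier.get(prefix)
print("cache hit:", hit is not None)
print("cache miss:", tier.get([1, 2, 3]) is None)
```

Because cached KV tensors for long shared prefixes (system prompts, retrieved documents) can dwarf HBM capacity, pushing them to a petabyte-scale tier is what lets the GPU spend its cycles on decoding rather than redundant prefill.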

Conclusion

As AI adoption accelerates, data centers require robust storage and memory solutions to support the growing demands of model training and inference. Kioxia’s high-capacity PCIe 5.0 SSD and Pliops’ innovative storage acceleration represent key advancements in AI infrastructure. These technologies will be showcased at NVIDIA GTC 2025, highlighting how next-gen storage is shaping the future of AI.

FAQs

What is the capacity of Kioxia’s LC9 Series NVMe SSD?

  • 122.88TB

What interface does Kioxia’s LC9 Series NVMe SSD use?

  • PCIe 5.0

What is the purpose of Pliops’ collaboration with LMCache Lab?

  • To enhance LLM inference performance by optimizing shared storage and cache efficiency.

What memory tier does the Pliops and LMCache Lab solution introduce?

  • A petabyte-scale memory layer beneath GPU HBM, improving compute efficiency for AI inference.