SK hynix is pushing hard on AI memory bandwidth and capacity: 12-layer HBM4 (2,048 I/O, roughly 40% better power efficiency than HBM3E) and HBM3E paired with NVIDIA's GB300 Grace Blackwell platform, plus dense DDR5 MRDIMMs and RDIMMs aimed squarely at next-gen servers.
On the storage side, they're scaling QLC and 4D NAND to as much as 245 TB per drive across PCIe 4.0/5.0 and even SATA, while experimenting with computational storage (OASIS, optimizer-offload SSDs) so GPUs stay fed with data instead of stalling on I/O.
Their CXL-based pooled and heterogeneous memory demos, including memory-attached accelerators tied into Meta's Faiss and SK Telecom's AI cloud, point to a roadmap where disaggregated, memory-centric infrastructure becomes a core design point for large-scale AI data centers. For anyone planning future GPU and storage architectures, the linked article is worth a close read.
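The article doesn't show code, but to make the Faiss angle concrete: vector similarity search is exactly the kind of memory-bandwidth-bound workload these pooled-memory demos target, since every query must stream large chunks of the index through memory. Below is a minimal, hypothetical sketch of the core operation Faiss accelerates (brute-force k-nearest-neighbor search) written in plain NumPy; the dimensions and dataset are made up for illustration.

```python
import numpy as np

def knn_search(db: np.ndarray, queries: np.ndarray, k: int) -> np.ndarray:
    """Brute-force L2 nearest-neighbor search.

    For each query row, return the indices of the k closest rows in `db`.
    Every query scans the whole database, so throughput is dominated by
    memory bandwidth -- the resource HBM and CXL pooling aim to scale.
    """
    # Squared L2 distances via the expansion |q - d|^2 = |q|^2 - 2 q.d + |d|^2
    d2 = (
        (queries ** 2).sum(axis=1, keepdims=True)
        - 2.0 * queries @ db.T
        + (db ** 2).sum(axis=1)
    )
    return np.argsort(d2, axis=1)[:, :k]

# Toy index: 1,000 random 64-dimensional vectors (illustrative sizes only)
rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64)).astype(np.float32)

# Query with the first 5 database vectors: each should find itself first
idx = knn_search(db, db[:5], k=3)
print(idx[:, 0])  # [0 1 2 3 4]
```

Real deployments swap this O(N) scan for Faiss's compressed and clustered indexes, but the memory-traffic pattern is the same, which is why offloading it to memory-attached accelerators is attractive.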
Source: SK Hynix Showcases Advanced AI Memory From HBM4 to Next-Gen Storage at SC25 | TechPowerUp