A Case Against CXL Memory Pooling
Philip Levis, Kun Lin, and Amy Tai
Published in Proceedings of HotNets 2023: Twenty-Second ACM Workshop on Hot Topics in Networks, November 2023.
Abstract
Compute Express Link (CXL) is a replacement for PCIe. With much lower latency than PCIe and hardware support for cache coherence, CXL lets programs efficiently access remote memory. These capabilities have opened the possibility of CXL memory pools in datacenter and cloud networks: a large pool of memory shared by multiple machines. Recent work argues that such pools could reduce memory provisioning and datacenter costs. In this paper, we argue that three problems preclude CXL memory pools from being practical: cost, complexity, and utility. The cost of a CXL pool will outweigh any savings from provisioning less RAM. CXL has substantially higher latency than main memory, enough that using it requires rewriting network applications in complex ways. Finally, analyzing two production traces from Google and Azure Cloud, we find that modern servers are large relative to most VMs; even simple VM packing algorithms strand little memory, undermining the main incentive for pooling. As long as these three properties hold, CXL memory pools are unlikely to be a useful technology for datacenter or cloud systems, despite recent research interest.
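The stranded-memory point can be illustrated with a toy simulation (this is a hypothetical workload and a simple first-fit heuristic, not the traces or packing algorithms analyzed in the paper): when VM memory demands are small relative to server capacity, even naive packing leaves little memory stranded.

```python
import random

def first_fit_pack(vm_sizes, server_capacity):
    """Place VM memory demands onto servers using first-fit.

    Returns a list of the remaining free (stranded) memory on each server
    once every VM has been placed.
    """
    servers = []  # remaining free memory per server, in GB
    for size in vm_sizes:
        for i, free in enumerate(servers):
            if free >= size:
                servers[i] = free - size
                break
        else:
            # No existing server fits this VM; open a new one.
            servers.append(server_capacity - size)
    return servers

random.seed(0)
# Hypothetical parameters: 512 GB servers, VMs of 4-64 GB, so VMs are
# small relative to the server -- the regime the paper describes.
vms = [random.choice([4, 8, 16, 32, 64]) for _ in range(2000)]
free = first_fit_pack(vms, server_capacity=512)
stranded = sum(free) / (len(free) * 512)
print(f"servers used: {len(free)}, stranded memory: {stranded:.1%}")
```

Under these assumptions the stranded fraction comes out to a few percent at most; a pool would have little slack to reclaim.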
Paper (315KB)
BibTeX entry
@inproceedings{hotnets23-cxl,
  author    = "Philip Levis and Kun Lin and Amy Tai",
  title     = "{A Case Against CXL Memory Pooling}",
  booktitle = "{Proceedings of HotNets 2023: Twenty-Second ACM Workshop on Hot Topics in Networks}",
  year      = {2023},
  month     = {November}
}