SC20 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Kraken: Memory-Efficient Continual Learning for Large-Scale Real-Time Recommendation

Authors: Minhui Xie (Tsinghua University, China; Kuaishou Technology); Kai Ren (Kuaishou Technology); Youyou Lu (Tsinghua University, China); Guangxu Yang, Qingxing Xu, and Bihai Wu (Kuaishou Technology); Jiazhen Lin (Tsinghua University, China); Hongbo Ao and Wanhong Xu (Kuaishou Technology); and Jiwu Shu (Tsinghua University, China)

Abstract: Modern recommendation systems in industry often use deep learning (DL) models whose accuracy improves with more data and more model parameters. Current open-source DL frameworks such as TensorFlow and PyTorch, however, scale poorly when training recommendation models with terabytes of parameters. To efficiently learn large-scale recommendation models from data streams that generate hundreds of terabytes of training data daily, we introduce a continual learning system called Kraken. Kraken contains a special parameter server implementation that dynamically adapts to the rapidly changing set of sparse features for the continuous training and serving of recommendation models. Kraken also provides a sparsity-aware training system that uses different learning optimizers for dense and sparse parameters to reduce memory overhead. Extensive experiments using real-world datasets confirm the effectiveness and scalability of Kraken. Kraken can improve the accuracy of recommendation tasks under the same memory budget, or cut memory usage to one third while maintaining model performance.
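The abstract's sparsity-aware training idea can be sketched roughly as follows: a stateful optimizer such as Adam keeps two extra moment values per weight, which is affordable for the small dense part of the model but triples memory for a terabyte-scale embedding table, so the sparse parameters instead get a stateless update and are materialized only on first access. This is a minimal illustration under those assumptions; the class names, the choice of plain SGD for the sparse side, and the dict-backed table are all hypothetical, not Kraken's actual design or API.

```python
# Illustrative only: dense parameters use Adam (2 extra floats of state per
# weight), while sparse embedding rows use stateless SGD (no optimizer state)
# and are created lazily, mimicking a dynamically growing sparse feature set.
import math

class DenseAdam:
    """Adam for the small dense part: keeps m and v per weight."""
    def __init__(self, weights, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.w = weights
        self.m = [0.0] * len(weights)   # first-moment estimates
        self.v = [0.0] * len(weights)   # second-moment estimates
        self.t = 0
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, grads):
        self.t += 1
        for i, g in enumerate(grads):
            self.m[i] = self.b1 * self.m[i] + (1 - self.b1) * g
            self.v[i] = self.b2 * self.v[i] + (1 - self.b2) * g * g
            m_hat = self.m[i] / (1 - self.b1 ** self.t)
            v_hat = self.v[i] / (1 - self.b2 ** self.t)
            self.w[i] -= self.lr * m_hat / (math.sqrt(v_hat) + self.eps)

class SparseSGD:
    """Stateless SGD for embedding rows: memory cost stays at 1x per row,
    and rows appear on first touch (rapidly changing sparse features)."""
    def __init__(self, dim, lr=0.05):
        self.table = {}            # feature id -> embedding row
        self.dim, self.lr = dim, lr

    def row(self, fid):
        return self.table.setdefault(fid, [0.0] * self.dim)

    def step(self, fid, grad):
        r = self.row(fid)
        for i, g in enumerate(grad):
            r[i] -= self.lr * g

# One hypothetical training step on each parameter class.
dense = DenseAdam([0.5, -0.3])
dense.step([0.1, -0.2])
sparse = SparseSGD(dim=4)
sparse.step("user:42", [0.1, 0.0, -0.1, 0.2])
```

The memory contrast is the point: `DenseAdam` holds three arrays per parameter vector (`w`, `m`, `v`), while `SparseSGD` holds only the embedding rows themselves, which is what makes a single-copy update attractive for the sparse side.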