梯度 | A Peking University Alumna Shares Her Model-Training "Alchemy": How Does OpenAI Train Hundred-Billion-Parameter Models? (Part 6)


Memory-Efficient Optimizers
Optimizers also consume memory. Take the popular Adam optimizer as an example: internally it maintains momentum and variance estimates, each the same size as the gradients and the model parameters. All of a sudden, we need to store roughly 4x the memory of the model weights.
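To make the "4x" figure concrete, here is a minimal back-of-the-envelope sketch in Python (assuming plain FP32 training throughout; the function name and the 1.5B-parameter example are illustrative, not from the original post):

```python
# A rough estimate (assuming FP32 everywhere) of why Adam roughly quadruples
# the memory needed for the weights alone: the optimizer keeps a momentum and
# a variance tensor with the same shape as every parameter, on top of gradients.

def adam_training_memory_gib(num_params: int, bytes_per_elem: int = 4) -> dict:
    weights   = num_params * bytes_per_elem
    gradients = num_params * bytes_per_elem   # same shape as the weights
    momentum  = num_params * bytes_per_elem   # Adam first moment (m)
    variance  = num_params * bytes_per_elem   # Adam second moment (v)
    gib = 1024 ** 3
    return {
        "weights_gib":   weights / gib,
        "gradients_gib": gradients / gib,
        "optimizer_gib": (momentum + variance) / gib,
        "total_gib":     (weights + gradients + momentum + variance) / gib,
    }

# A 1.5B-parameter model: ~5.6 GiB of weights balloons to ~22 GiB in total,
# before activations and temporary buffers are even counted.
print(adam_training_memory_gib(1_500_000_000))
```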
To reduce this memory footprint, several memory-efficient optimizers have been proposed. Compared with Adam, the Adafactor optimizer (Shazeer et al., 2018) does not store the full momentum and variance; it only tracks per-row and per-column sums of the moving averages and estimates the second moments from these sums.
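The factored second-moment idea can be sketched in a few lines of NumPy for a single matrix parameter. This is only an illustration of the row/column-sum trick; it omits Adafactor's bias correction, relative step sizes, and update clipping, and the function name is invented for this example:

```python
import numpy as np

# For a weight matrix W of shape (m, n), keep only a row vector R (m,) and a
# column vector C (n,) of exponential moving averages of squared gradients,
# then reconstruct a rank-1 estimate of the full (m, n) second-moment matrix.

def adafactor_like_step(W, grad, R, C, lr=1e-3, beta2=0.999, eps=1e-30):
    sq = grad ** 2 + eps
    R[:] = beta2 * R + (1 - beta2) * sq.sum(axis=1)   # per-row sums, shape (m,)
    C[:] = beta2 * C + (1 - beta2) * sq.sum(axis=0)   # per-column sums, shape (n,)
    v_hat = np.outer(R, C) / R.sum()                  # estimated second moments
    W -= lr * grad / np.sqrt(v_hat)
    return W, R, C

# The second-moment state shrinks from m*n values to just m + n values.
m, n = 4096, 1024
W = (0.02 * np.random.randn(m, n)).astype(np.float32)
R, C = np.zeros(m, np.float32), np.zeros(n, np.float32)
grad = np.random.randn(m, n).astype(np.float32)
W, R, C = adafactor_like_step(W, grad, R, C)
```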
The SM3 optimizer (Anil et al., 2019) takes a different approach to adaptive optimization to achieve a similar memory saving.
ZeRO, the Zero Redundancy Optimizer (Rajbhandari et al., 2019), reduces the memory consumed when training large models along two fronts:

  • Most of the memory is consumed by model states, including optimizer states (e.g., Adam momentum and variance), gradients, and parameters. Mixed-precision training also demands a lot of memory, because on top of the FP16 versions the optimizer must keep an FP32 copy of the parameters and of the other optimizer states (a worked byte count follows after this list).
  • The rest is consumed by activations, temporary buffers, and unusable fragmented memory (called residual states in the paper).
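As promised in the first bullet above, here is the per-parameter byte count under mixed-precision training with Adam, following the accounting used in the ZeRO paper (the 7.5B-parameter example is only for illustration):

```python
# FP16 weight (2 B) + FP16 gradient (2 B) + FP32 master weight (4 B)
# + FP32 momentum (4 B) + FP32 variance (4 B) = 16 bytes per parameter
# of model states, before any activations or buffers are counted.

def model_state_bytes(num_params: int) -> int:
    fp16_weights  = 2 * num_params
    fp16_grads    = 2 * num_params
    fp32_weights  = 4 * num_params   # master copy kept by the optimizer
    fp32_momentum = 4 * num_params   # Adam first moment
    fp32_variance = 4 * num_params   # Adam second moment
    return fp16_weights + fp16_grads + fp32_weights + fp32_momentum + fp32_variance

# A 7.5B-parameter model already needs ~120 GB of model states, far more than
# a single GPU holds, which is exactly what ZeRO-DP's partitioning targets.
print(model_state_bytes(7_500_000_000) / 1e9, "GB")
```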
ZeRO combines two approaches, ZeRO-DP and ZeRO-R. ZeRO-DP is an enhanced form of data parallelism that avoids simple redundancy of the model states: it partitions optimizer states, gradients, and parameters across multiple data-parallel processes with a dynamic communication schedule that minimizes communication volume. ZeRO-R optimizes the memory consumed by the residual states using partitioned activation recomputation, constant buffer sizes, and on-the-fly memory defragmentation.
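To illustrate the ZeRO-DP partitioning idea, here is a toy, single-process Python simulation (this is not DeepSpeed's actual API; the loop over ranks stands in for the reduce-scatter and all-gather collectives used in a real distributed setup):

```python
import numpy as np

# Each of N_RANKS data-parallel "ranks" owns the Adam state for only 1/N of the
# parameters, updates only its own shard, and the shards are then reassembled
# (the all-gather step in a real ZeRO-DP implementation).

N_RANKS = 4
num_params = 1_000_000
params = (0.02 * np.random.randn(num_params)).astype(np.float32)

shards = np.array_split(np.arange(num_params), N_RANKS)
opt_state = [{"m": np.zeros(len(s), np.float32),
              "v": np.zeros(len(s), np.float32)} for s in shards]

def zero_dp_step(params, full_grad, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, t=1):
    new_shards = []
    for rank, idx in enumerate(shards):
        g = full_grad[idx]                 # what reduce-scatter would deliver
        st = opt_state[rank]               # this rank's 1/N of the Adam state
        st["m"] = b1 * st["m"] + (1 - b1) * g
        st["v"] = b2 * st["v"] + (1 - b2) * g ** 2
        m_hat = st["m"] / (1 - b1 ** t)
        v_hat = st["v"] / (1 - b2 ** t)
        new_shards.append(params[idx] - lr * m_hat / (np.sqrt(v_hat) + eps))
    return np.concatenate(new_shards)      # stands in for the all-gather

grad = np.random.randn(num_params).astype(np.float32)
params = zero_dp_step(params, grad)
```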

References:
[1] Li et al. “PyTorch Distributed: Experiences on Accelerating Data Parallel Training” VLDB 2020.
[2] Cui et al. “GeePS: Scalable deep learning on distributed GPUs with a GPU-specialized parameter server.” EuroSys 2016.
[3] Shoeybi et al. “Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.” arXiv preprint arXiv:1909.08053 (2019).
[4] Narayanan et al. “Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM.” arXiv preprint arXiv:2104.04473 (2021).
[5] Huang et al. “GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism.” arXiv preprint arXiv:1811.06965 (2018).
[6] Narayanan et al. “PipeDream: Generalized Pipeline Parallelism for DNN Training.” SOSP 2019.
[7] Narayanan et al. “Memory-Efficient Pipeline-Parallel DNN Training.” ICML 2021.
[8] Shazeer et al. “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.” arXiv preprint arXiv:1701.06538 (2017).
[9] Lepikhin et al. “GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding.” arXiv preprint arXiv:2006.16668 (2020).
[10] Fedus et al. “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity.” arXiv preprint arXiv:2101.03961 (2021).
[11] Narang & Micikevicius, et al. “Mixed precision training.” ICLR 2018.
[12] Chen et al. “Training Deep Nets with Sublinear Memory Cost.” arXiv preprint arXiv:1604.06174 (2016).
[13] Jain et al. “Gist: Efficient data encoding for deep neural network training.” ISCA 2018.
[14] Shazeer & Stern. “Adafactor: Adaptive learning rates with sublinear memory cost.” arXiv preprint arXiv:1804.04235 (2018).
[15] Anil et al. “Memory-Efficient Adaptive Optimization.” arXiv preprint arXiv:1901.11150 (2019).
[16] Rajbhandari et al. “ZeRO: Memory Optimizations Toward Training Trillion Parameter Models.” arXiv preprint arXiv:1910.02054 (2019).

Compiled from: https://lilianweng.github.io/lil-log/2021/09/24/train-large-neural-networks.html