DeepSpeed-FastGen: High-throughput text generation for LLMs via MII and DeepSpeed-Inference
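A minimal sketch of what serving a model through DeepSpeed-MII's non-persistent pipeline looks like; the model name, prompts, and generation length here are illustrative, not part of the announcement.

```python
import mii

# Load a Hugging Face causal LM behind DeepSpeed-FastGen's
# non-persistent pipeline (model name is illustrative).
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

# Generate continuations for a batch of prompts.
responses = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=128)
print(responses)
```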
ZeRO-Inference: 20X faster inference through weight quantization and KV cache offloading (https://github.com/microsoft/DeepSpeedExa...)
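A sketch of the offloading side of this approach, assuming the standard Hugging Face Transformers + DeepSpeed ZeRO stage-3 parameter-offload path; the model name and config values are placeholders, not tuned settings, and the script is meant to be launched with the `deepspeed` launcher.

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

# ZeRO stage-3 config with model weights offloaded to CPU memory,
# so they stream to the GPU layer by layer during the forward pass.
# Values here are a sketch, not recommended settings.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,
}

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# Wrap the model in a DeepSpeed engine; no optimizer is needed
# for inference-only use.
engine = deepspeed.initialize(model=model, config=ds_config)[0]
engine.module.eval()

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(engine.device)
with torch.no_grad():
    outputs = engine.module.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```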
Partition-aware ZeRO with up to 2x reduction in communication time!
DeepSpeed was used to train Megatron-Turing NLG 530B, the world's largest generative language model at the time of its release.