10x bigger model training on a single GPU with ZeRO-Offload

We introduce a new technology called ZeRO-Offload to enable 10x bigger model training on a single GPU. ZeRO-Offload extends ZeRO-2 to leverage both CPU and GPU memory for training large models. On a machine with a single GPU, our users can now run models of up to 13 billion parameters without running out of memory, 10x bigger than existing approaches allow, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the door for many deep learning practitioners to explore bigger and better models.
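
Turning on ZeRO-Offload is a configuration change rather than a code rewrite. Below is a minimal, hedged sketch of a DeepSpeed setup with ZeRO-2 plus CPU offloading of the optimizer state; the exact config keys (e.g. `offload_optimizer`) follow recent DeepSpeed releases and may differ in older ones, and the model and batch sizes are placeholders.

```python
# Sketch only: enable ZeRO-Offload through a DeepSpeed config dict.
# Key names follow recent DeepSpeed releases (older ones used a "cpu_offload"
# flag); the tiny model and batch size are illustrative placeholders.
import torch
import deepspeed

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,                              # ZeRO-2: partition optimizer states and gradients
        "offload_optimizer": {"device": "cpu"},  # keep optimizer states and updates in CPU memory
    },
}

model = torch.nn.Linear(4096, 4096)  # stand-in for a much larger model

# deepspeed.initialize wraps the model in an engine that manages the offloading
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

The rest of the training loop stays unchanged apart from calling `model_engine.backward(loss)` and `model_engine.step()` in place of the usual PyTorch calls.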
