Distributed and Parallel Training Tutorials
Distributed training is a model training paradigm in which the training
workload is spread across multiple worker nodes, significantly improving
the speed of training and model accuracy. While distributed training can
be used for any type of ML model training, it is most beneficial for
large models and compute-demanding tasks such as deep learning.
There are a few ways you can perform distributed training in
PyTorch, and each method has its advantages in certain use cases:
Read more about these options in Distributed Overview.
DDP Intro Video Tutorials
A step-by-step video series on how to get started with
DistributedDataParallel and advance to more complex topics
Getting Started with Distributed Data Parallel
This tutorial provides a short and gentle intro to the PyTorch
DistributedDataParallel module.
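As a quick taste of the API, here is a minimal sketch of wrapping a model in DistributedDataParallel. It assumes the script is launched with torchrun (so rank and world size come from environment variables); the tiny linear model and optimizer settings are placeholders, not the tutorial's exact code.

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK/WORLD_SIZE/MASTER_ADDR, so no arguments are needed here.
        dist.init_process_group(backend="gloo")  # use "nccl" for GPU training

        model = nn.Linear(10, 10)        # placeholder model
        ddp_model = DDP(model)           # gradients are averaged across ranks

        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        outputs = ddp_model(torch.randn(20, 10))
        loss_fn(outputs, torch.randn(20, 10)).backward()
        optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=2 ddp_example.py (an illustrative filename), each process trains its own replica while DDP keeps gradients in sync.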
Distributed Training with Uneven Inputs Using
the Join Context Manager
This tutorial describes the Join context manager and
demonstrates its use with DistributedDataParallel.
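For orientation, here is a minimal sketch of the pattern, assuming the process group is already initialized via torchrun and that each rank has a different number of batches; the model and batch counts are illustrative.

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.algorithms.join import Join
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group(backend="gloo")
    model = DDP(nn.Linear(1, 1))

    # Deliberately uneven: each rank trains on a different number of batches.
    num_batches = 5 + dist.get_rank()
    with Join([model]):
        # Ranks that exhaust their data shadow the collective communications of
        # the ranks that are still training, so no rank hangs.
        for _ in range(num_batches):
            model(torch.randn(8, 1)).sum().backward()

    dist.destroy_process_group()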
Getting Started with FSDP
This tutorial demonstrates how you can perform distributed training
with FSDP on the MNIST dataset.
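A minimal sketch of the core idea, assuming one GPU per rank and a process group initialized by torchrun; the small feed-forward model stands in for the tutorial's MNIST network.

    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank())

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).cuda()
    fsdp_model = FSDP(model)   # parameters, gradients, and optimizer state are sharded

    optim = torch.optim.Adam(fsdp_model.parameters(), lr=1e-3)
    loss = fsdp_model(torch.randn(32, 784, device="cuda")).sum()
    loss.backward()
    optim.step()

    dist.destroy_process_group()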
Advanced Model Training with Fully Sharded Data Parallel (FSDP)
In this tutorial, you will learn how to fine-tune a HuggingFace (HF) T5
model with FSDP for text summarization.
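One key ingredient in that setting is an auto-wrap policy that tells FSDP to treat each transformer block as its own sharding unit. A hedged sketch of that idea, assuming a process group is already initialized and using the HuggingFace T5 classes purely for illustration:

    import functools
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
    from transformers import T5ForConditionalGeneration
    from transformers.models.t5.modeling_t5 import T5Block

    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Wrap every T5Block as a separate FSDP unit so its parameters can be
    # gathered and freed independently during forward and backward.
    t5_wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={T5Block},
    )
    sharded_model = FSDP(model, auto_wrap_policy=t5_wrap_policy)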
Getting Started with Distributed RPC Framework
This tutorial demonstrates how to get started with RPC-based distributed
training.
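As a flavor of the API, here is a minimal sketch in which two workers initialize RPC and worker0 runs torch.add on worker1; the worker names and launch details are illustrative, and MASTER_ADDR/MASTER_PORT are assumed to be set in the environment.

    import torch
    import torch.distributed.rpc as rpc

    def run(rank, world_size):
        # MASTER_ADDR and MASTER_PORT must be set so the workers can rendezvous.
        rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
        if rank == 0:
            # Synchronous remote call: torch.add executes on worker1 and the
            # result tensor is shipped back to worker0.
            result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
            print(result)   # tensor([4., 4.])
        rpc.shutdown()      # blocks until all outstanding RPC work is done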
Implementing a Parameter Server Using Distributed RPC Framework
This tutorial walks you through a simple example of implementing a
parameter server using PyTorch’s Distributed RPC framework.
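The gist of the pattern, as a hedged sketch (class and method names are placeholders, not the tutorial's exact code): a ParameterServer object lives on one RPC worker, and trainers hold an RRef to it and call its methods remotely.

    import torch
    import torch.distributed.rpc as rpc

    class ParameterServer:
        def __init__(self):
            self.weight = torch.zeros(10)

        def get_weight(self):
            return self.weight

    # On a trainer (with RPC already initialized), create the server on the
    # worker named "ps" and interact with it through the returned RRef:
    #
    #   ps_rref = rpc.remote("ps", ParameterServer)
    #   weight = ps_rref.rpc_sync().get_weight()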
Implementing Batch RPC Processing Using Asynchronous Executions
In this tutorial you will build batch-processing RPC applications
with the @rpc.functions.async_execution decorator.
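In brief, a function decorated with @rpc.functions.async_execution returns a torch.futures.Future, so the callee can delay its reply until a full batch of requests has arrived. A simplified, hedged sketch of that batching idea (the function name and batching logic are illustrative):

    import torch
    import torch.distributed.rpc as rpc
    from torch.futures import Future

    _pending = []        # (tensor, future) pairs waiting for a full batch
    _BATCH_SIZE = 2

    @rpc.functions.async_execution
    def batched_sum(x):
        # Returning a Future tells RPC not to send a response yet; the caller
        # is answered only once the future is marked complete below.
        fut = Future()
        _pending.append((x, fut))
        if len(_pending) >= _BATCH_SIZE:
            xs, futs = zip(*_pending)
            result = torch.stack(xs).sum(dim=0)
            for f in futs:
                f.set_result(result)   # releases all callers in the batch at once
            _pending.clear()
        return fut

    # Callers invoke it like any other RPC, e.g.:
    #   fut = rpc.rpc_async("server", batched_sum, args=(torch.ones(3),))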
Combining Distributed DataParallel with Distributed RPC Framework
In this tutorial you will learn how to combine distributed data
parallelism with distributed model parallelism.
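A hedged sketch of the hybrid pattern, with illustrative names: a large embedding table lives on a parameter-server rank and is reached over RPC, while the dense part of the model is replicated on each trainer with DistributedDataParallel. The full tutorial additionally uses distributed autograd and DistributedOptimizer to drive the backward pass through both parts.

    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    class HybridModel(nn.Module):
        def __init__(self, remote_emb_rref):
            super().__init__()
            # RRef to an nn.EmbeddingBag held on the parameter-server rank.
            self.remote_emb_rref = remote_emb_rref
            # Dense part, replicated and synchronized across trainers by DDP.
            self.fc = DDP(nn.Linear(16, 8))

        def forward(self, indices, offsets):
            # Model parallelism over RPC for the sparse part ...
            emb = self.remote_emb_rref.rpc_sync().forward(indices, offsets)
            # ... followed by data parallelism for the dense part.
            return self.fc(emb)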
Customize Process Group Backends Using Cpp Extensions
In this tutorial you will learn to implement a custom ProcessGroup
backend and plug it into the PyTorch distributed package using
cpp extensions.
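Once such a backend extension is built, the Python side is small: register the backend under a name and pass that name to init_process_group. A hedged sketch, where dummy_collectives and createDummyProcessGroup are placeholders for whatever the compiled extension exposes:

    import torch
    import torch.distributed as dist
    import dummy_collectives   # hypothetical compiled C++ extension

    # Register the custom backend, then select it like any built-in backend.
    dist.Backend.register_backend("dummy", dummy_collectives.createDummyProcessGroup)
    dist.init_process_group(backend="dummy", rank=0, world_size=1)

    x = torch.ones(4)
    dist.all_reduce(x)          # dispatched to the custom ProcessGroup implementation
    dist.destroy_process_group()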