In the README for the distributed optimizer, it is mentioned that bf16 training uses a combination of bf16 model parameters and fp32 model grads, and that the distributed optimizer's fp32 main gradients are the same tensors as the model's fp32 gradients. However, in PyTorch, the gradients produced by forward+backward normally match the data type of the parameters. So bf16 model params should always yield bf16 model grads, and this is apparently true in the fp16 case, where the optimizer must keep an extra copy of fp32 main grads.
Could you please explain how it is possible to have bf16 parameters with fp32 gradients in the context of bf16 training? I am wondering why there is a difference between fp16 and bf16 training.