Hi @yanz0920 During Adaround optimization, we try to put all of the cached intermediate activation data for a given layer on the GPU for faster optimization whenever possible. In your case, you can disable this behavior by patching the AdaroundOptimizer.enable_caching_acts_data method, as shown in this unit test.
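A minimal sketch of the patching pattern described above. The stub class below stands in for AIMET's real AdaroundOptimizer (in aimet_torch, path may vary by version) so the snippet is self-contained; only the method name enable_caching_acts_data comes from the reply, and in practice you would patch the imported AIMET class instead of a stub.

```python
# Hypothetical sketch: disable Adaround's GPU activation caching by
# monkeypatching enable_caching_acts_data to return False.

class AdaroundOptimizer:
    """Stub standing in for aimet_torch's AdaroundOptimizer."""

    @staticmethod
    def enable_caching_acts_data() -> bool:
        # Default behavior: try to keep cached activations on the GPU.
        return True


# Apply the patch before running AdaRound so cached activation data
# is no longer forced onto the GPU.
AdaroundOptimizer.enable_caching_acts_data = staticmethod(lambda: False)

print(AdaroundOptimizer.enable_caching_acts_data())  # prints: False
```

The same idea can be expressed with unittest.mock.patch.object inside a test, which restores the original method automatically when the patch context exits.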
What to do when the model is too large to use adaround?
For example, when the model has 6B parameters and its dtype is torch.float32, the storage requirements are as follows:
model: 24 GB
quantsim_model: 24 GB
But I hit an OOM when running AdaRound on an NVIDIA A100, which has 80 GB of CUDA memory...
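A back-of-the-envelope check of the numbers above: 6B parameters at 4 bytes each (torch.float32) is about 24 GB, and the quantsim copy doubles that before any cached activations or optimizer state are counted.

```python
# Rough weight-memory estimate for the setup described in the issue.
params = 6e9            # 6B parameters
bytes_per_param = 4     # torch.float32

model_gb = params * bytes_per_param / 1e9
total_gb = 2 * model_gb  # original model + quantsim copy

print(f"model: {model_gb:.0f} GB")          # prints: model: 24 GB
print(f"model + quantsim: {total_gb:.0f} GB")  # prints: model + quantsim: 48 GB
```

With 48 GB consumed by weights alone, the per-layer cached activation data that AdaRound moves onto the GPU can push usage past the A100's 80 GB.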