
[FEATURE] Minimum required GPU RAM for different architectures #1948

Open
NightMachinery opened this issue Sep 11, 2023 · 1 comment


NightMachinery commented Sep 11, 2023

Is your feature request related to a problem? Please describe.
Is the minimum required GPU memory for different architectures documented anywhere?

E.g., I want to know what GPU(s) I need to rent to be able to do a backward pass on ViT-g/14.

Describe the solution you'd like
If not, it would be very helpful to add a sheet that documents this data for:

  • Batch Size: 1
    • forward-pass with no_grad
    • forward-pass and backward-pass
  • Batch Size: 10
  • Batch Size: 100

I am not familiar with distributed inference/training: is the required GPU RAM divided roughly linearly across multiple GPUs? And does the multi-GPU overhead differ between models?

Describe alternatives you've considered
The alternative is testing these manually (e.g. with a measurement script like the sketch below), but that is expensive, since one first needs access to a big enough GPU, and it wastes everyone's time because each user has to repeat the same measurements themselves.
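
For concreteness, here is a rough sketch of the kind of manual measurement meant above. It assumes a CUDA GPU that fits the model, PyTorch >= 1.13 (for `torch.cuda.OutOfMemoryError`), and that `vit_giant_patch14_224` is the timm name closest to ViT-g/14 (check `timm.list_models()` for the exact identifier):

```python
# Rough per-pass peak-memory measurement -- an illustrative sketch, not an
# official timm utility. The model name is an assumption; resolution is taken
# from the model's default_cfg.
import torch
import timm

def peak_memory_gib(model_name: str, batch_size: int, backward: bool) -> float:
    """Peak allocated CUDA memory (GiB) for one forward (and optional backward) pass."""
    model = timm.create_model(model_name, pretrained=False).cuda()
    model.train(backward)
    # timm models expose their expected input size (C, H, W) in default_cfg
    x = torch.randn(batch_size, *model.default_cfg["input_size"], device="cuda")

    torch.cuda.reset_peak_memory_stats()
    if backward:
        model(x).sum().backward()  # dummy scalar loss, just to trigger the backward pass
    else:
        with torch.no_grad():
            model(x)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1024 ** 3

if __name__ == "__main__":
    for bs in (1, 10, 100):
        for backward in (False, True):
            try:
                gib = peak_memory_gib("vit_giant_patch14_224", bs, backward)
                print(f"bs={bs} backward={backward}: {gib:.1f} GiB")
            except torch.cuda.OutOfMemoryError:
                print(f"bs={bs} backward={backward}: OOM on this GPU")
```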

NightMachinery added the enhancement (New feature or request) label Sep 11, 2023
rwightman (Collaborator) commented Sep 11, 2023

The batch size in the inference and train results tables is the max batch size (found with some reasonable step granularity) each model runs at, starting from 1024 for inference and 512 for train. Doing those runs takes an incredibly long time, so it's not done often.

Measuring actual GPU use is not particularly reliable; there is so much variability due to the way allocation and kernel benchmarking work that you really have to try the batch size and see it succeed or fail to know if it works (something like the sketch below).
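
A minimal sketch of that trial-and-error approach, assuming PyTorch >= 1.13 for `torch.cuda.OutOfMemoryError`; the starting point and decay factor are illustrative choices, not the exact settings of timm's benchmarking:

```python
# Trial-and-error search for the largest training batch size that fits -- an
# illustrative sketch, not the benchmarking code used for the results tables.
import torch
import timm

def find_max_train_batch_size(model_name: str, start: int = 512, decay: float = 0.5) -> int:
    model = timm.create_model(model_name, pretrained=False).cuda().train()
    size = model.default_cfg["input_size"]  # (C, H, W) expected by the model
    bs = start
    while bs >= 1:
        try:
            x = torch.randn(bs, *size, device="cuda")
            model(x).sum().backward()        # succeed or fail -- that's the real test
            return bs
        except torch.cuda.OutOfMemoryError:
            model.zero_grad(set_to_none=True)
            torch.cuda.empty_cache()
            bs = int(bs * decay)             # step down and try again
    return 0
```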
