
About using multiple GPUs to do lisa fine-tuning #774

Open
orderer0001 opened this issue Apr 19, 2024 · 4 comments

Comments

@orderer0001

With multiple GPUs, is LISA fine-tuning still run directly with the script ./scripts/run_finetune_with_lisa.sh? Do I need to set any multi-GPU parameters?

@research4pan
Contributor

Thanks for your interest in LMFlow! We are currently working on full multi-GPU support for LISA; model parallelism is not integrated yet. If you run the script directly, you get data parallelism, which may require more memory than the single-GPU version.

Please stay tuned for our latest update, thanks for your understanding 🙏
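Until full multi-GPU support lands, a common workaround is to pin the LISA run to a single device with the standard `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch (the script path is from this thread; the GPU index 0 is an assumption):

```shell
# Make only GPU 0 visible to the process, so the LISA script
# runs in its single-GPU configuration instead of data parallelism.
CUDA_VISIBLE_DEVICES=0 ./scripts/run_finetune_with_lisa.sh
```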

@orderer0001
Author

Can other training methods be configured with multiple GPUs? Do I need to set parameters manually?

@orderer0001
Author

When will Lisa training’s support for multiple GPUs be updated?

@research4pan
Contributor

> Can other training methods be configured with multiple GPUs? Do I need to set parameters manually?

Yes. You may use ./scripts/run_finetune.sh; that script supports model parallelism via DeepSpeed ZeRO-3.
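For reference, ZeRO-3 (parameter partitioning across GPUs) is enabled through the `zero_optimization.stage` field of a DeepSpeed config. A minimal sketch, assuming the defaults are acceptable (the config file LMFlow actually ships may set additional fields):

```json
{
  "zero_optimization": {
    "stage": 3
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```

Stage 3 shards optimizer states, gradients, and the model parameters themselves across GPUs, which is what allows models that do not fit on a single GPU to be fine-tuned.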
