
chatglm3 single-GPU training throws an error #131

Open
eanfs opened this issue Jan 21, 2024 · 4 comments

eanfs commented Jan 21, 2024

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 14.56 GiB total capacity; 12.31 GiB already allocated; 486.50 MiB free; 13.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
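For reference, the max_split_size_mb hint in the message is set through the PYTORCH_CUDA_ALLOC_CONF environment variable before CUDA is initialized. A minimal sketch (the value 128 is an illustrative assumption; this only mitigates fragmentation and cannot recover a genuine capacity shortfall):

```python
# Configure the CUDA caching allocator before torch initializes CUDA.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value

import torch  # imported after the env var is set so the allocator picks it up
print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4"
```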

eanfs (Author) commented Jan 21, 2024

The GPU is a T4.

liucongg (Owner) commented

Out of GPU memory. I suggest switching to a GPU with more memory, or fine-tuning with QLoRA instead.

eanfs (Author) commented Feb 5, 2024

> Out of GPU memory. I suggest switching to a GPU with more memory, or fine-tuning with QLoRA instead.

This is the only GPU I have. Could you describe the QLoRA fine-tuning approach?

sevenandseven commented

> Out of GPU memory. I suggest switching to a GPU with more memory, or fine-tuning with QLoRA instead.

> This is the only GPU I have. Could you describe the QLoRA fine-tuning approach?

Hi, I'd like to ask how to fine-tune chatglm3-6b with QLoRA. Is there any code you can share?
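For reference, a common QLoRA recipe for chatglm3-6b uses the Hugging Face transformers / peft / bitsandbytes stack. A minimal sketch, assuming that stack rather than this repository's own training scripts (the hyperparameters and the target_modules choice are illustrative assumptions):

```python
# A minimal QLoRA sketch for chatglm3-6b, assuming the transformers / peft /
# bitsandbytes stack. Hyperparameters below are illustrative, not tuned.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "THUDM/chatglm3-6b"

# Load the base weights quantized to 4-bit NF4 so they fit within a T4's
# ~15 GiB of memory; compute still runs in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,  # ChatGLM3 ships its own modeling code
)

# Freeze the quantized base, enable gradient checkpointing, and attach LoRA
# adapters; only the adapter weights are trained.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # ChatGLM3's fused QKV projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the 6B parameters
```

Training can then proceed with an ordinary causal-LM loop (for example transformers.Trainer) over the tokenized data; with a small batch size and gradient accumulation, the 4-bit base plus fp16 adapters should stay within a T4's memory.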
