
Can it support LoRA fine-tuning on the CPU? After all, idle CPU resources are also a waste. #380

Open
MRQJsfhf opened this issue Apr 25, 2024 · 4 comments

Comments

@MRQJsfhf

No description provided.

@erwe324

erwe324 commented Apr 25, 2024

I don't know, but I'd guess fine-tuning only on the CPU would be very slow.

@danielhanchen
Contributor

Yes, the CPU can be slow - but on that note, Unsloth's long-context support moves data to RAM, so in theory we do use the CPU / RAM somewhat

@MRQJsfhf
Author

This is indeed slow, but I could use my rest and leisure time to train.

> The CPU may be slow - but on that note, Unsloth's long-context support moves content to RAM, so in theory we do use the CPU / RAM
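For anyone curious what CPU-only LoRA training actually involves, here is a minimal sketch in plain NumPy. This is not Unsloth's API - the dimensions, toy regression target, and manual gradients are all illustrative - but it shows the core idea the thread is discussing: the base weight `W` stays frozen, and only the two small low-rank adapter matrices `A` and `B` receive gradient updates, which is the workload an idle CPU would be asked to do.

```python
import numpy as np

# Minimal CPU-only LoRA sketch (NumPy, illustrative only - not Unsloth's API).
# The frozen base weight W is augmented with a trainable low-rank update
# delta = (alpha / r) * B @ A; gradients flow only into A and B.

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 2        # adapter rank r << min(d_in, d_out)
alpha = 4.0
scale = alpha / r                # standard LoRA scaling factor

W = rng.normal(size=(d_out, d_in))              # frozen "pretrained" weight
A = rng.normal(size=(r, d_in)) / np.sqrt(d_in)  # trainable down-projection
B = np.zeros((d_out, r))                        # trainable up-projection, zero init

# Toy regression target: a slightly shifted weight the adapters should absorb.
W_target = W + 0.1 * rng.normal(size=(d_out, d_in))
X = rng.normal(size=(64, d_in))
Y = X @ W_target.T

lr = 0.05
for step in range(500):
    delta = scale * (B @ A)                 # low-rank weight update
    err = X @ (W + delta).T - Y
    loss = (err ** 2).mean()
    if step == 0:
        loss_init = loss
    # Manual least-squares gradients; W itself is never updated.
    g_delta = 2.0 * err.T @ X / X.shape[0]  # dL/d(W + delta)
    g_B = scale * g_delta @ A.T
    g_A = scale * B.T @ g_delta
    B -= lr * g_B
    A -= lr * g_A

print(f"loss: {loss_init:.4f} -> {loss:.4f}")
```

The point is that the trainable state is tiny (`r * (d_in + d_out)` values instead of `d_in * d_out`), so the per-step cost is dominated by the forward pass through the frozen weights - slow on a CPU, but not impossible for someone willing to leave it running overnight, as suggested above.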

@danielhanchen
Contributor

@MRQJsfhf Hmm interesting I'll see what I can do!
