Upgrade Peft version to 0.10.0 for LLM finetune #10886
Conversation
The Peft version in other examples will be upgraded in another pull request.
Reverted changes in HF-PEFT finetune.py.
Note that this PR removes support for Peft 0.5.0.
self.fan_in_fan_out = fan_in_fan_out
self._active_adapter = adapter_name
self.update_layer(
It seems that we don't need to additionally handle the qa_lora=True case here?
Peft doesn't support QA-LoRA yet (huggingface/peft#986), so we still need to handle qa_lora in our own code.
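For context, the custom handling roughly needs to do what QA-LoRA describes: average-pool the adapter input over each weight-quantization group so the LoRA branch stays compatible with group-wise quantized weights. Below is a minimal illustrative sketch of that idea (class name, arguments, and defaults are all hypothetical, not this PR's actual implementation):

```python
import torch
import torch.nn as nn

class QALoRALinear(nn.Module):
    """Hypothetical sketch of a QA-LoRA-style adapter around a frozen linear layer.

    The LoRA A matrix operates on inputs average-pooled over quantization
    groups (size `group_size`), which is the core difference from plain LoRA.
    """

    def __init__(self, base_linear: nn.Linear, r: int = 8,
                 group_size: int = 32, alpha: int = 16):
        super().__init__()
        assert base_linear.in_features % group_size == 0
        self.base = base_linear  # stands in for the (quantized) frozen layer
        self.group_size = group_size
        self.lora_A = nn.Linear(base_linear.in_features // group_size, r, bias=False)
        self.lora_B = nn.Linear(r, base_linear.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.group_size
        # Average-pool the input within each quantization group.
        pooled = x.view(*x.shape[:-1], x.shape[-1] // g, g).mean(-1)
        return self.base(x) + self.lora_B(self.lora_A(pooled)) * self.scaling
```

Because lora_B is zero-initialized, wrapping a layer this way leaves its output unchanged until training updates the adapter.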
LGTM
LGTM
Description
1. Why the change?
2. User API changes
3. Summary of the change
4. How to test?
5. New dependencies