Feature request
It would be nice if you could specify a regex pattern for layer names to ignore from target modules.
Motivation
I have a base model that I apply LoRA to, but I also add a few extra attention layers at the end of the model. I don't want LoRA to target these new decoder layers.
Your contribution
I could work on a PR if there is interest.
I'm not quite sure what you mean. On the one hand, you mention targeting LoRA layers, which implies you want to exclude certain layers from target_modules. This is already possible as target_modules accepts a regex, which is flexible enough to cover most use cases. As an example, for the OPT model, if I pass LoraConfig(target_modules=r".*\.(?!5\b)\d+\.self_attn\.k_proj"), I'll match all k_proj layers except for layer 5.
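A quick way to sanity-check such a pattern before training is to run it over the module names yourself. The sketch below assumes PEFT's behavior of matching a string `target_modules` against each full module name with `re.fullmatch`; the OPT-style layer names are made up for illustration:

```python
import re

# The pattern from the example above: all k_proj layers except layer 5.
# (?!5\b) is a negative lookahead that rejects the number 5 exactly,
# while \b still allows 15, 50, 51, etc.
pattern = r".*\.(?!5\b)\d+\.self_attn\.k_proj"

# Illustrative module names in OPT's naming scheme (not read from a real model)
module_names = [
    "model.decoder.layers.4.self_attn.k_proj",   # should match
    "model.decoder.layers.5.self_attn.k_proj",   # excluded by (?!5\b)
    "model.decoder.layers.15.self_attn.k_proj",  # still matches: \b blocks only exact "5"
    "model.decoder.layers.5.self_attn.v_proj",   # different projection, never matches
]

matched = [name for name in module_names if re.fullmatch(pattern, name)]
print(matched)
```

Extending this, the issue's use case (skip extra layers appended at the end) could be handled the same way, e.g. with a lookahead that rejects the appended layers' indices.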
On the other hand, you mention modules_to_save in the title. Note that this is not related to applying LoRA: the modules listed there are fully fine-tuned and saved in the checkpoint file. Your extra attention layers should not be included there automatically.