Finetuning with adaptors #3136
I am currently trying to fine-tune different models on different datasets, but I am not yet sure how much training is needed. What I intend to do is train one epoch on all the data and then check the results based on the fine-tuning adapter created. Now, if I want to keep fine-tuning the model, would it be enough to select the adapter from the dropdown menu, set my training parameters, and restart the fine-tune from where it stopped previously? And would the new adapter contain all the "knowledge" from both sessions of fine-tuning? I am asking because I see that the adapter is getting loaded, but I am not sure how the new adapter will get saved (what knowledge will be saved).
Replies: 1 comment
When you select an adapter from the dropdown menu and fine-tune it on a new dataset, there are two cases. If you selected `create_new_adapter` in the LoRA tab, the newly trained adapter does not contain the knowledge from the previous dataset, so you need to specify both adapters when running inference. Otherwise, training continues on the existing adapter, and the newly trained adapter contains the knowledge from all the datasets, so you only need the last adapter for inference.
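The difference between the two cases can be sketched numerically. A LoRA adapter is a low-rank delta added to a frozen base weight. With `create_new_adapter`, each session produces an independent delta, so inference must apply both; continuing training on the same adapter folds both sessions into a single delta, so the last adapter alone suffices. A minimal NumPy illustration (all matrices here are random toy stand-ins, not real trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2

# Frozen base model weight (stand-in).
W = rng.normal(size=(d, d))

# Session 1 trains a rank-r LoRA delta: B1 @ A1.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
# Session 2 with create_new_adapter trains a second, independent delta.
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))

# create_new_adapter case: both adapters must be applied at inference.
W_both = W + B1 @ A1 + B2 @ A2

# Continued-training case (conceptually): the two deltas live in one
# adapter; stacking the factors shows their effects simply add up.
B_cont = np.concatenate([B1, B2], axis=1)
A_cont = np.concatenate([A1, A2], axis=0)
W_cont = W + B_cont @ A_cont

# The two formulations produce the same effective weight, which is why
# "knowledge" from both sessions survives either way -- the difference
# is only whether it is stored in one adapter file or two.
print(np.allclose(W_both, W_cont))
```

This is only an arithmetic picture, of course: in the real continued-training case the same rank-r matrices keep being updated rather than growing in rank, but the additive structure is what makes the two workflows equivalent in what they can represent.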