
Adapters + LLama -- re-design. #2526

Open
TParcollet opened this issue Apr 26, 2024 · 6 comments · May be fixed by #2534
Labels: enhancement (New feature or request), important, invalid (This doesn't seem right)

Comments

@TParcollet (Collaborator)

Describe the bug

It's not a bug, just a discussion. I think that people including @Adel-Moumen @poonehmousavi @mravanelli and maybe @pplantinga @asumagic may want to engage.

Adapters, or, more generally, altering an existing pre-trained model (you can see it as an object originating from the Pretrainer or checkpointer) is becoming more and more common. Because of this, we must, imho, define a proper design in SpeechBrain for doing so. Recently, I implemented LoRA and Houlsby adapters for our Transformer on my side, but I also realised that @poonehmousavi did some work for Llama here. I don't think we are doing this correctly. The Llama 2 code, for instance, can be hard to follow, and some functions (like the one replacing a module inside an existing module) should be generalised and turned into a general SpeechBrain util. My strategy would be to create an Adapters.py in lobes where we could put everything relating to adapters, instead of having them scattered across lobes files.
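
For illustration, a minimal sketch of what such a generalised replacement util could look like (the name `replace_module` and the example path are hypothetical, not existing SpeechBrain code):

```python
import torch.nn as nn


def replace_module(model: nn.Module, target_name: str, new_module: nn.Module) -> None:
    """Replace the sub-module found at the dotted path ``target_name`` in-place.

    Hypothetical usage:
        replace_module(model, "encoder.layers.0.ffn.linear1", my_lora_layer)
    """
    parent_name, _, child_name = target_name.rpartition(".")
    # If there is no dot, the parent is the model itself.
    parent = model.get_submodule(parent_name) if parent_name else model
    setattr(parent, child_name, new_module)
```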

What do you folks think?

Expected behaviour

Respect the Zen of SpeechBrain.


@TParcollet added the enhancement, invalid, and important labels on Apr 26, 2024
@poonehmousavi (Collaborator) commented Apr 26, 2024

I totally agree. What we have done for Llama 2 is QLoRA (quantization + LoRA), and it is only applicable to Llama 2. With this trend of public LLMs being released so often, all of which need LoRA for fine-tuning, I think we need a unified class that can handle LoRA (and QLoRA) for different models. In HF, sft_trainer.py handles this efficient fine-tuning. We also need an easy interface in SB for efficient fine-tuning, and it should work well with DDP.
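
For context, a minimal sketch of what the QLoRA recipe looks like with the HF stack (assuming transformers, peft, and bitsandbytes are installed; the model name and target modules are only examples):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantization half of QLoRA: load the frozen base model in 4-bit NF4
# (requires a CUDA GPU and bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# LoRA half: inject trainable low-rank adapters into the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```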

@mravanelli (Collaborator)

I think it is very important too. We can support at least LoRA and adapters. I'm not sure what the best way to support this elegantly would be. One idea could be to implement a wrapper (e.g., Adapter) to apply to our models, which plugs in the necessary new modules. However, it seems quite hard to me to create something that is easy to use and flexible at the same time. Any ideas from the community?
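
A rough sketch of what such a wrapper could look like (everything here, including the Adapter name and the adapter_class argument, is hypothetical):

```python
import torch.nn as nn


class Adapter(nn.Module):
    """Freeze a pre-trained model and swap its Linear layers for adapter modules.

    ``adapter_class`` is any callable taking the original nn.Linear and returning
    a module with the same input/output shapes (e.g. a LoRA or Houlsby layer).
    """

    def __init__(self, model: nn.Module, adapter_class, **adapter_kwargs):
        super().__init__()
        self.model = model
        for p in self.model.parameters():
            p.requires_grad = False  # only the newly plugged modules will train
        for name, module in list(self.model.named_modules()):
            if isinstance(module, nn.Linear):
                parent_name, _, child_name = name.rpartition(".")
                parent = (
                    self.model.get_submodule(parent_name) if parent_name else self.model
                )
                setattr(parent, child_name, adapter_class(module, **adapter_kwargs))

    def forward(self, *args, **kwargs):
        return self.model(*args, **kwargs)
```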

@pplantinga (Collaborator)

I have used PEFT to apply LoRA to an existing model, and it was pretty straightforward: you just pass the model and it automatically replaces all the relevant layers with LoRA layers. We could do something similar.
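
For illustration, the basic (non-quantized) PEFT flow really is just a few lines (a sketch; ``model`` is assumed to be an already-loaded HF model, and the target module names depend on the architecture):

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)  # matching Linear layers become LoRA layers
model.print_trainable_parameters()
```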

@pplantinga (Collaborator)

We could even import and use PEFT directly, if its dependencies are light.

@Adel-Moumen (Collaborator) commented Apr 27, 2024

Hi @TParcollet, I agree with your strategy.

> I have used PEFT to apply LoRA to an existing model, and it was pretty straightforward: you just pass the model and it automatically replaces all the relevant layers with LoRA layers. We could do something similar.

I don't think the PEFT library is necessary, since the theory/implementation is quite "easy" to reproduce. For instance, ESPnet has its own implementation of LoRA etc. (https://github.com/espnet/espnet/blob/47de29c21f5a7db22089717e92add5e9604fcd48/espnet2/layers/create_adapter_fn.py#L224). We should follow the same strategy and provide our own adapters, because many researchers may want to develop their own design or modify the code, which may be harder if we mix SpeechBrain and PEFT.
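
For illustration, a self-contained LoRA linear layer is indeed only a few lines (a sketch, not existing SpeechBrain code, following the usual W x + (alpha/r) B A x formulation):

```python
import math
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank (LoRA) update."""

    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad = False
        if self.linear.bias is not None:
            self.linear.bias.requires_grad = False
        # A is initialised randomly, B at zero, so training starts from the base model.
        self.lora_A = nn.Parameter(torch.empty(rank, linear.in_features))
        self.lora_B = nn.Parameter(torch.zeros(linear.out_features, rank))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen base projection + scaled low-rank trainable update.
        return self.linear(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())
```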

Note: we could have our own implementation and also provide a PEFT-compatible wrapper, but I don't know if this makes sense / is necessary.

@TParcollet (Collaborator, Author) commented Apr 27, 2024

Alright, I think that there are two problems here.

  1. Building a wrapper that naively replaces Linear layers with LoRA / Houlsby and others is simple; I did it, and it's not that hard. However, this is only valid for basic research purposes, and making it slightly more evolved, e.g. only targeting some specific linear layers, may be a bit harder (see the sketch after this list).
  2. Building something that is usable in practice, with quantisation, fast implementations, etc. may be MUCH more complicated. I mean, the literature on adapters for LLMs is absurdly massive, and supporting all of it is impossible. Here, I'd say that supporting an extra lib could make sense, but it would need to be something very stable and reliable AND that covers more than what we can do in 1. Otherwise, there is no point in losing control over the code.
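
On the "only targeting some specific linear layers" part of 1., a glob-style name filter (similar in spirit to PEFT's target_modules) is one way to keep the wrapper simple while adding flexibility. A sketch, where the pattern strings are only hypothetical examples of module names:

```python
import fnmatch
import torch.nn as nn


def find_target_linears(model: nn.Module, patterns):
    """Return the dotted names of nn.Linear modules matching any glob pattern."""
    return [
        name
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear)
        and any(fnmatch.fnmatch(name, pat) for pat in patterns)
    ]

# e.g. only adapt attention and feed-forward projections (names are examples):
targets = find_target_linears(model, ["*.self_att.*", "*.pos_ffn.*"])
```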

I am happy to do a PR for 1., but it will remain shallow IMHO.
