
Wrong LoRA Weights? #89

Open
Pompey21 opened this issue May 11, 2024 · 2 comments
Pompey21 commented May 11, 2024

Hi, I have tried running your model following the instructions you provided, but without success. After completing the setup, I attempted a simple text-to-text inference, and the model only outputs repetitive word tokens that do not make sense (image below). After testing the Vicuna and LLaMA weights separately and confirming they were not the problem, I went through the code base. The only way I could get the model to perform the basic text-to-text task was to disable the LoRA weights altogether. After that, however, I could of course no longer trigger any other modality...

[Screenshot 2024-04-29 at 15:03:07: model output showing repetitive, nonsensical tokens]
WenjiaWang0312 commented

[image: similar repetitive output]

Hi, this is my result. I cannot generate reasonable results either. Have you solved it?

Pompey21 (Author) commented

Hi @WenjiaWang0312, to resolve this issue you must disable the LoRA weights. However, that will only allow you to do text-to-text, not cross-modality.
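For anyone hitting the same failure: a LoRA adapter modifies a base weight as W' = W + (alpha/r) * B @ A, so "disabling the LoRA weights" amounts to dropping the delta term and using the base weight W unchanged. A mismatched or corrupted adapter perturbs every merged layer, which is consistent with the garbage output above. Below is a minimal pure-Python sketch of that merge (all names, shapes, and values are illustrative, not taken from this repository's code):

```python
def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_lora(W, A, B, alpha, r, enabled=True):
    """Return W + (alpha/r) * B @ A when enabled, else a copy of W.

    W: d_out x d_in base weight; B: d_out x r; A: r x d_in.
    Setting enabled=False mimics "disabling the LoRA weights".
    """
    if not enabled:
        return [row[:] for row in W]
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update, same shape as W
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 2x2 base weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]    # r=1, d_in=2
B = [[1.0], [1.0]]  # d_out=2, r=1

merged = apply_lora(W, A, B, alpha=2, r=1)                    # W shifted by the adapter
restored = apply_lora(W, A, B, alpha=2, r=1, enabled=False)   # base W, adapter off
```

If the repository loads adapters through Hugging Face PEFT, an equivalent runtime switch is the `disable_adapter()` context manager on a `PeftModel`, which temporarily routes the forward pass through the base weights only.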
