What is the setup for training "from scratch"? #15
Comments
Hi,
maybe a simple question, but I can't find it in your paper: for the models you train "from scratch", how is the Q-Former initialized? Do you use the stage-1 checkpoint from BLIP-2, or is it randomly initialized from the very start?
Thank you for your help!

Thanks for your interest! It is initialized from the stage-1 checkpoint.

Thanks! That lines up with my small-scale experiments. On another note, did you try training from "real" scratch, where all weights are initialized randomly? Interestingly, that gave me the best results. The LAVIS "from real scratch" initialization still loads some bert-base weights, though, which did not work well for me.

Sorry for the late reply; something was wrong with my mailbox, so I missed some mails. Actually, I have not tried your setting (from scratch). You mean that a randomly initialized Q-Former performs better than a bert-base-initialized one? That is a really interesting phenomenon. Is this the result of stage 1 or stage 2?

Yes, for me, random initialization worked better than bert-base (and about as well as stage 1) when training with the LLM for stage 2.
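To make the three initialization options discussed above concrete, here is a minimal sketch using HuggingFace `transformers`, where the Q-Former is modeled as a BERT-style decoder with cross-attention, as in BLIP-2/LAVIS. The tiny config sizes, the query-token count, and the variable names are illustrative assumptions, not the repository's actual code:

```python
import torch
from transformers import BertConfig, BertLMHeadModel

# Tiny illustrative config (the real Q-Former uses bert-base sizes:
# hidden_size=768, 12 layers; cross-attention attends to image features).
config = BertConfig(
    vocab_size=30522,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
    is_decoder=True,
    add_cross_attention=True,
)

# "Real scratch": every weight is drawn fresh from the config's initializer.
qformer_random = BertLMHeadModel(config)

# bert-base init (what LAVIS's "from scratch" path still partially does)
# would instead load pretrained weights, e.g.:
# qformer_bert = BertLMHeadModel.from_pretrained("bert-base-uncased")

# Stage-1 init would load a BLIP-2 stage-1 checkpoint into the same module
# (path is hypothetical):
# qformer_random.load_state_dict(torch.load("stage1_qformer.pth"))

# The learnable query tokens are randomly initialized in all three cases.
num_query_tokens = 32
query_tokens = torch.nn.Parameter(
    torch.zeros(1, num_query_tokens, config.hidden_size)
)
query_tokens.data.normal_(mean=0.0, std=config.initializer_range)
```

The commented-out `from_pretrained` and `load_state_dict` lines mark the two alternative initializations; only the random-init path runs as written.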