I changed `llm_model` in `blip2_instructBLIP_vicuna7b.yaml` to point at the llama-7b model from https://huggingface.co/baffo32/decapoda-research-llama-7B-hf, but when I rerun InstructBLIP the inference output is empty (null), and no error is raised along the way. If I use vicuna-7b-v1.1, inference works normally. Does anyone know why? Thanks.
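For reference, the edit being described is roughly the following; only the `llm_model` key comes from the report, and the surrounding structure is an assumption based on typical LAVIS config layout:

```yaml
# blip2_instructBLIP_vicuna7b.yaml (excerpt -- surrounding keys are assumptions)
model:
  arch: blip2_vicuna_instruct
  # changed from the local vicuna-7b-v1.1 path to the plain llama-7b checkpoint:
  llm_model: "baffo32/decapoda-research-llama-7B-hf"
```

Note that InstructBLIP's released vicuna-7b weights were trained against an instruction-tuned Vicuna LLM, so swapping in a base llama-7b checkpoint changes the tokenizer/model pairing the projection layers were trained for, which could plausibly produce silent empty outputs rather than an explicit error.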