Fail to run v2 with flash attention #140
Hi! Please try to set |
I added the snippet at Ask-Anything/video_chat2/conversation.py, lines 64 to 75 (commit 078540a), and I get a new error message.
It might be because there is a shape mismatch between inputs_embeds (shape = [1, 125]) and attention_mask (shape = [1, 126]).
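As a rough sketch of the mismatch described above: the model expects `inputs_embeds` and `attention_mask` to share the same sequence length, and an off-by-one (125 vs. 126) triggers the error. The tensor names and the trim-to-match fix below are illustrative assumptions, not code from the repository.

```python
import torch

# Illustrative shapes matching the reported error:
# inputs_embeds is [batch, seq_len, hidden]; attention_mask is [batch, seq_len].
inputs_embeds = torch.randn(1, 125, 768)
attention_mask = torch.ones(1, 126, dtype=torch.long)  # one position too long

seq_len = inputs_embeds.shape[1]
if attention_mask.shape[1] != seq_len:
    # One possible workaround: trim the mask so its length matches the
    # embedded sequence before calling the model's forward/generate.
    attention_mask = attention_mask[:, :seq_len]

assert attention_mask.shape[1] == inputs_embeds.shape[1]
```

Whether trimming or padding is correct depends on where the extra mask position comes from (e.g. an extra BOS/EOS token appended during prompt construction), so this is a debugging aid rather than a definitive fix.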
Can you simply try not to use |
Yes, I was able to run the model without |
I got the following error message when I input a 2-minute-long video with the default hyperparameter settings (beam search number = 1, temperature = 1, video segments = 8) and "Hi" as the text input.