-
https://docs.openwebui.com/tutorial/images should be able to help you out here. Getting a model to describe an image only works with multi-modal LLMs that support images as input.
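For reference, this is roughly what sending an image to Ollama looks like at the API level: the image is base64-encoded and passed in the `images` field of a `/api/generate` request. A minimal sketch (the model name `llava` and the file path are placeholders; any multi-modal model you have pulled would work):

```python
import base64

def build_vision_request(model: str, prompt: str, image_path: str) -> dict:
    """Build a payload for Ollama's /api/generate endpoint.

    Images are passed as base64 strings in the `images` field;
    only multi-modal models (e.g. llava) will actually use them.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "prompt": prompt,
        "images": [image_b64],
        "stream": False,
    }

# Example (assumes Ollama is running on the default port):
# POST this payload to http://localhost:11434/api/generate
# payload = build_vision_request("llava", "What does the text in this image say?", "photo.png")
```

If the model you have selected is text-only, the image is silently ignored or unreadable, which matches the "image is not detected" behavior described above.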
-
Maybe I'm misunderstanding this feature, but when I upload an image and ask the AI to tell me what the text in the image says, it replies that no image is detected.
This is the response I get:
I guess under the hood the image isn't (or can't?) being sent to Ollama, but if that's the case, what is the proper way to use the attachment feature?
If it should work, what am I doing wrong?