Llama packs LLM #11712
Unanswered
pjbruno327 asked this question in Q&A

How can the embedding model and LLM be changed when using a llama index pack?

Replies: 1 comment
You could try:

    from llama_index.core import Settings
    from llama_index.llms.ollama import Ollama

    Settings.llm = Ollama(model="llama2", request_timeout=700.0)

Though I am still hitting a call to the OpenAI API from something else.
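The leftover OpenAI call is most likely the default embedding model, which still points at OpenAI unless it is overridden as well. A minimal sketch of overriding both globals, assuming the llama-index-llms-ollama and llama-index-embeddings-huggingface integration packages are installed (the model names are only examples):

    from llama_index.core import Settings
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    # Route LLM calls to a local Ollama server instead of OpenAI.
    Settings.llm = Ollama(model="llama2", request_timeout=700.0)

    # The default embed_model is OpenAI's; override it as well, otherwise
    # embedding calls still hit the OpenAI API.
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

With both globals overridden before any index is built, the default pipeline should no longer need an OpenAI key.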
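To address the original question more directly: llama packs generally pick up the global Settings when they build their indices and query engines, and many packs also expose an llm (and sometimes embed_model) constructor argument. A rough sketch, where the pack name, paths, and keyword arguments are examples and should be checked against the downloaded pack's own source:

    from llama_index.core import Settings, SimpleDirectoryReader
    from llama_index.core.llama_pack import download_llama_pack
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    # Override the global defaults before the pack builds anything, so any
    # index or query engine it constructs uses these models instead of OpenAI's.
    Settings.llm = Ollama(model="llama2", request_timeout=700.0)
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

    # Download the pack's source into a local directory (example pack and paths).
    RAGFusionPipelinePack = download_llama_pack("RAGFusionPipelinePack", "./rag_fusion_pack")

    documents = SimpleDirectoryReader("./data").load_data()

    # Constructor and run() signatures vary from pack to pack; this assumes the
    # pack takes an llm keyword and a query keyword -- check the downloaded code.
    pack = RAGFusionPipelinePack(documents, llm=Settings.llm)
    response = pack.run(query="How do I switch the LLM?")

Because the pack source is downloaded locally, you can also edit it directly if a pack hard-codes a model it does not let you override.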