
implement using llamacpp as LLM model #22

Open
adeelhasan19 opened this issue Nov 24, 2023 · 4 comments

Comments

@adeelhasan19

I am trying to use an open-source LLM with llama.cpp, but I am getting this error:

"ValueError: Must pass in vector index for CondensePlusContextChatEngine."

I am also new to LlamaIndex. Can anyone help me with what exactly I need to configure in order to run the RAGs?

@jerryjliu
Contributor

See our customization tutorial (specifically the part about customizing LLMs): https://docs.llamaindex.ai/en/latest/getting_started/customization.html

Also the LLM module guide: https://docs.llamaindex.ai/en/latest/module_guides/models/llms.html
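The error in the original question means the chat engine was constructed without a vector index to retrieve context from. A minimal sketch of the usual fix, assuming the late-2023 `llama_index` API (the `ServiceContext` era) with the `LlamaCPP` integration, a hypothetical local GGUF model path, and a hypothetical `./data` directory:

```python
# Sketch only: model_path and ./data are placeholders -- adjust for your setup.
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.llms import LlamaCPP

# Wrap the local llama.cpp model in LlamaIndex's LLM interface
llm = LlamaCPP(
    model_path="./models/llama-2-7b-chat.Q4_0.gguf",  # hypothetical path
    temperature=0.1,
    context_window=3900,
)

# Put the local LLM (and a local embedding model) into the service context
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")

# Build the vector index the chat engine needs for retrieval
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# CondensePlusContextChatEngine is created *from the index*, which supplies
# the retriever it complains about when constructed without one
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
print(chat_engine.chat("What is in my documents?"))
```

The key point is that `CondensePlusContextChatEngine` should be obtained via `index.as_chat_engine(...)` (or given a retriever explicitly) rather than instantiated on its own.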

@cocoa126

Try asking ChatGPT-4.

@cocoa126

> I am trying to use an open-source LLM with llama.cpp, but I am getting this error: "ValueError: Must pass in vector index for CondensePlusContextChatEngine." I am also new to LlamaIndex. Can anyone help me with what exactly I need to configure in order to run the RAGs?

Try asking GPT-4.

@khalilxg

@adeelhasan19 did you successfully load a local LLM with llama.cpp?


6 participants
@jerryjliu @khalilxg @adeelhasan19 @cocoa126 and others