
Load local models with GPTQ optimization #126

Closed · Answered by pieroit
DanielusG asked this question in Q&A

We now support both llama-cpp-python and Ollama for running local models.
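
For reference, a minimal sketch of loading a local quantized model with llama-cpp-python is shown below. Note that llama-cpp-python consumes GGUF quantizations rather than GPTQ checkpoints, and the model path and parameters here are illustrative placeholders, not project defaults:

```python
# Minimal sketch: running a local quantized model via llama-cpp-python.
# The model file below is a hypothetical local GGUF quantization.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 for CPU-only
)

output = llm(
    "Q: What does GPTQ quantization trade off? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

With Ollama, the equivalent workflow is to pull a quantized model (for example `ollama pull llama2`) and query it over its local HTTP API instead of loading the weights in-process.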

Answer selected by DanielusG