How to load a local model? #57

Open
khalilxg opened this issue Dec 24, 2023 · 1 comment

Comments

@khalilxg

No description provided.

0xM4sk commented Apr 24, 2024

I modified core/builder_config.py:

import os

from llama_index.llms.openai_like import OpenAILike

API_KEY = os.getenv("OPENAI_API_KEY")

# Point the builder LLM at a local OpenAI-compatible server instead of OpenAI.
BUILDER_LLM = OpenAILike(
    api_base="[IP]:1337",   # address of the local server (Jan's defaults to port 1337)
    model="[model ID]",     # ID of the model loaded on that server
    is_chat_model=True,
    max_tokens=None,
    api_version="v1",
    api_key=API_KEY,
)

Using this method I was able to run inference against local models hosted by Jan. Unfortunately, my TensorRT Mistral model had streaming issues, but I got other models working at least partially. Note that .streamlit/secrets.toml still seems to require a valid OpenAI API key; I'm not seeing any usage billed against it, but it's worth knowing.
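For anyone else trying this, a minimal standalone sketch can help verify the local endpoint before wiring it into the app. The base URL below assumes Jan's default local address, and the model ID is a placeholder; substitute whatever your server actually reports (most OpenAI-compatible servers list models at GET /v1/models):

# Minimal smoke test for a local OpenAI-compatible endpoint.
# "localhost:1337" and the model ID are assumptions; adjust to your setup.
import os

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    api_base="http://localhost:1337/v1",  # assumed Jan default; change to your server
    model="mistral-ins-7b-q4",            # hypothetical model ID
    is_chat_model=True,
    api_key=os.getenv("OPENAI_API_KEY", "local"),  # many local servers ignore the key
)

# A single non-streaming completion is the simplest check that the endpoint works.
print(llm.complete("Say hello in one sentence."))

If you hit streaming problems like the TensorRT Mistral case above, comparing llm.complete() against llm.stream_complete() on the same prompt is a quick way to tell whether the server or the client side is at fault.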
