Jan
An open-source alternative to OpenAI that runs on your own computer or server
Popular repositories
- nitro-tensorrt-llm (forked from NVIDIA/TensorRT-LLM)
  Nitro is a C++ inference server built on top of TensorRT-LLM, with an OpenAI-compatible API. Runs blazing-fast inference on Nvidia GPUs. Used in Jan.
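Because Nitro exposes an OpenAI-compatible API, existing OpenAI-style client code can simply point at the local server. A minimal sketch, assuming a `/v1/chat/completions` route on localhost; the host, port, and model name below are assumptions for illustration, not confirmed Nitro defaults:

```python
import json
import urllib.request

# Hypothetical local endpoint; the actual host/port depend on your setup.
ENDPOINT = "http://localhost:3928/v1/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,  # model name is an assumption; depends on what is loaded
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def send(payload, endpoint=ENDPOINT):
    """POST the payload to the server (requires a running instance)."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request("Hello from Jan")
```

Calling `send(payload)` would return the familiar OpenAI-shaped response, so client code written against the hosted API needs only the endpoint URL changed.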
Repositories
- cortex.llamacpp
- cortex.tensorrt-llm
- tensorrtllm_backend (forked from triton-inference-server/tensorrtllm_backend)
The Triton TensorRT-LLM Backend