Reference implementation of Mistral AI 7B v0.1 model.
Updated Dec 25, 2023 · Python
Notes on the Mistral AI model
Local retrieval-augmented-generation with Mixtral, Ollama, Chainlit, and Embedchain 🌺🤖
🦙 Free and open-source Large Language Model (LLM) chatbot web UI and API. Self-hosted, offline-capable, and easy to set up. Powered by LangChain.
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Translate text from stdin into the current locale using ollama and mixtral:instruct
A Python module for running the Mixtral-8x7B language model with customisable precision and attention mechanisms.
Examples of RAG using Llamaindex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Unofficial .NET SDK for the Mistral AI platform.
LLM prompt augmentation with RAG, integrating external custom data from a variety of sources so you can chat with those documents
Collection of templates for the pragmatics engine to test and evaluate the capabilities of large language models.
Bypass restricted and censored content on AI chat prompts 😈