llama2
Here are 801 public repositories matching this topic...
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
Updated May 10, 2024 - TypeScript
Build an AI chatbot in minutes with the Sendbird Chatbot Widget.
Updated May 10, 2024 - TypeScript
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
Updated May 10, 2024 - Dart
🤖 A collection of practical AI repos, tools, websites, papers and tutorials. A practical AI treasure chest 💎
Updated May 10, 2024 - Ruby
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Updated May 10, 2024 - Python
🤯 Lobe Chat - an open-source, modern-design LLM/AI chat framework. Supports multiple AI providers (OpenAI / Claude 3 / Gemini / Ollama / Bedrock / Azure / Mistral / Perplexity), multi-modal capabilities (vision / TTS) and a plugin system. One-click free deployment of your private ChatGPT chat application.
Updated May 10, 2024 - TypeScript
Run any open-source LLM, such as Llama 2 or Mistral, as an OpenAI-compatible API endpoint in the cloud.
Updated May 10, 2024 - Python
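OpenAI-compatible serving like this means existing client code keeps working once pointed at the new endpoint. A minimal sketch of the request body such a server expects, assuming a hypothetical local deployment at `localhost:3000` and the model identifier `llama2` (both are illustrative, not taken from any specific project above):

```python
import json

# Hypothetical endpoint and model name -- adjust for the actual deployment.
BASE_URL = "http://localhost:3000/v1"

payload = {
    "model": "llama2",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize Llama 2 in one sentence."},
    ],
    "temperature": 0.7,
}

# Serialize the body exactly as an OpenAI-compatible server expects it
# at POST {BASE_URL}/chat/completions.
body = json.dumps(payload)
```

Because the wire format matches OpenAI's Chat Completions schema, any client library that lets you override the base URL can talk to the self-hosted model unchanged.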
An efficient, flexible and full-featured toolkit for fine-tuning large models (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Updated May 10, 2024 - Python
Private chat with a local GPT over documents, images, video, and more. 100% private, Apache 2.0 licensed. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
Updated May 10, 2024 - Python
⚡️ Open-source LangChain-like AI RAG (Retrieval-Augmented Generation) knowledge base with web UI and enterprise SSO. Supports OpenAI, Azure, LLaMA, Google Gemini, HuggingFace, Claude, Grok, etc. Chatbot demo: https://demo.casibase.com, admin UI demo: https://demo-admin.casibase.com
Updated May 10, 2024 - Go
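The RAG pattern mentioned above boils down to retrieve-then-generate: fetch the documents most relevant to a query, then prepend them to the prompt. A minimal sketch, using a toy word-overlap retriever in place of real embeddings (all names and documents are illustrative, not from any project in this list):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Llama 2 is a family of open-weight LLMs released by Meta.",
    "FastAPI is a Python web framework.",
    "RAG augments prompts with retrieved documents.",
]
prompt = build_prompt("What is Llama 2?", docs)
```

Production RAG systems replace the word-overlap scoring with embedding similarity over a vector index, but the prompt-assembly step is the same shape.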
JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome).
Updated May 10, 2024 - Python
An LLM interface for interacting with an LLM service.
Updated May 10, 2024 - TypeScript
A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for various file types through textract.
Updated May 9, 2024 - Python
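Semantic search over precomputed embeddings typically ranks documents by cosine similarity between the query vector and each stored vector. A self-contained sketch of that core step, with hypothetical 3-dimensional vectors standing in for real model-generated embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product normalized by vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical precomputed embeddings; real systems store vectors
# produced by an embedding model, usually with hundreds of dimensions.
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
    "doc_c": [0.0, 0.2, 0.95],
}

def search(query_vec: list[float], top_k: int = 1) -> list[str]:
    """Return the ids of the top_k most similar documents."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:top_k]
```

At scale this linear scan is replaced by an approximate nearest-neighbor index, but the similarity measure stays the same.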
PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference.
Updated May 10, 2024 - Python
Design, conduct and analyze results of AI-powered surveys and experiments. Simulate social science and market research with large numbers of AI agents and LLMs.
Updated May 9, 2024 - Python