Self-Supervised Noise Embeddings (Self-SNE)
Updated May 30, 2024 - Jupyter Notebook
A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and in Node. It outperforms OpenAI's text-embedding-ada-002 and is significantly faster than Pinecone and other vector databases.
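The core of a client-side store like this, embedding generation aside, is nearest-neighbor search by cosine similarity over the stored vectors. A minimal sketch of that idea (all names are hypothetical, not this library's API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Toy in-memory vector store: add vectors, search by cosine similarity."""
    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    def search(self, query, k=3):
        # Score every stored vector against the query, highest first.
        scored = [(doc_id, cosine(query, vec)) for doc_id, vec in self.items]
        return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

store = VectorStore()
store.add("cat", [1.0, 0.0, 0.1])
store.add("dog", [0.9, 0.1, 0.0])
store.add("car", [0.0, 1.0, 0.9])
results = store.search([1.0, 0.0, 0.0], k=2)
```

Production libraries replace the linear scan with an approximate index (e.g. HNSW) and cache embeddings, but the scoring step is the same.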
IntelliSearch is an advanced retrieval-based question-answering and recommendation system that leverages embeddings and a large language model (LLM) to provide accurate and relevant information to users.
Plugin that creates a ChromaDB vector database to work with LM Studio running in server mode!
Unstract's interface to LLMs, Embeddings and VectorDBs.
Opus is a fast, modern LLM playground for embedding- and transformer-based models such as Gemini, GPT, Llama, and other community models from Hugging Face. The UI is built with React.js, and the backend runs on the robust, cross-platform Node.js runtime.
A binary analysis tool for identifying unknown function names using a word2vec model
PetPS: Supporting Huge Embedding Models with Tiered Memory
Web-ify your word2vec: framework to serve distributional semantic models online
⚡️Framework for fast persistent storage of multiple document embeddings and metadata into Pinecone for production-level RAG.
A CLI chatbot that uses a RAG architecture to adapt an LLM to a specific context. Users can ask questions and get responses either directly from LLM providers (OpenAI, Mistral AI, etc.) or grounded in the content of a website supplied as context via RAG.
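The RAG flow such chatbots follow, retrieve relevant passages, then prompt the LLM with them, can be sketched with a toy keyword retriever (real systems use embedding similarity, and all names here are hypothetical):

```python
def retrieve(question, passages, k=2):
    # Score each passage by word overlap with the question; keep the top k.
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in passages]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

def build_prompt(question, context):
    # Assemble a prompt that grounds the LLM in the retrieved context.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

passages = [
    "RAG combines retrieval with generation.",
    "Word2vec learns word embeddings.",
    "The retriever fetches relevant documents for the LLM.",
]
question = "What does the retriever do in RAG?"
ctx = retrieve(question, passages)
prompt = build_prompt(question, ctx)
```

The assembled prompt would then be sent to the chosen LLM; swapping the keyword scorer for embedding cosine similarity turns this into the standard dense-retrieval RAG pipeline.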
Generative Representational Instruction Tuning
An elegant hybrid search engine that significantly improves search precision by seamlessly querying semantically related results with embedding models. Intended for experimenting with AI models and integrations.
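Hybrid search of this kind is typically a weighted blend of a lexical score and an embedding similarity score. A minimal sketch under that assumption (names hypothetical, not this project's API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query_words, query_vec, doc_words, doc_vec, alpha=0.5):
    # Lexical score: fraction of query words that appear in the document.
    keyword = len(set(query_words) & set(doc_words)) / max(len(set(query_words)), 1)
    # Blend lexical and semantic (embedding) scores; alpha weights the lexical side.
    return alpha * keyword + (1 - alpha) * cosine(query_vec, doc_vec)
```

Real engines usually use BM25 for the lexical side and tune `alpha` per corpus, but the blending principle is the same.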
A repository covering pre-trained text embeddings in the cloud, from evaluation to deployment, including fine-tuning and vector stores.
Basics of Machine Learning: an end-to-end repository of basic machine learning models and notebooks.
This is the code for our paper "BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers".
M3E-Embedder is a Docker-based service for conveniently deploying and running the m3e embedding model, supporting fast integration of multiple embedding models and efficient computation.
A test embedding server (OpenAI-compatible API) serving models from LLaMA/Mistral.
Yet another word2vec implementation from scratch
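The first step of any from-scratch word2vec is generating (center, context) skip-gram training pairs from a tokenized corpus. A minimal sketch of that step (function name hypothetical):

```python
def skipgram_pairs(tokens, window=2):
    # For each position, pair the center word with every word
    # within `window` positions on either side.
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "the quick brown fox".split()
pairs = skipgram_pairs(tokens, window=1)
```

Training then fits embedding vectors so that a center word predicts its context words, typically via negative sampling.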
Semantic product search on Databricks