Simple implementation of OpenAI CLIP model in PyTorch.
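The projects listed here all build on CLIP's contrastive scoring: image and text embeddings are L2-normalized, compared by dot product, scaled by a learned temperature, and softmaxed. A minimal numpy sketch with toy stand-in vectors (the embeddings and the `logit_scale` value below are illustrative, not real CLIP outputs):

```python
import numpy as np

def clip_scores(image_embs, text_embs, logit_scale=100.0):
    # L2-normalize both modalities, as CLIP does before the dot product.
    i = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * i @ t.T           # scaled cosine similarities
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax over texts, per image

# Toy stand-in embeddings; real ones come from CLIP's image/text encoders.
img = np.array([[1.0, 0.0, 0.0]])
txt = np.array([[0.9, 0.1, 0.0],   # caption similar to the image
                [0.0, 1.0, 0.0]])  # unrelated caption
probs = clip_scores(img, txt)      # probability of each caption per image
```

In a real pipeline the toy vectors are replaced by the outputs of CLIP's image and text encoders; the ranking logic is unchanged.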
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
Text2ImageDescription retrieves relevant images from the Pascal VOC 2012 dataset with OpenAI CLIP, based on text queries, and generates descriptions with a quantized Mistral-7B model.
Sort a folder of images according to their similarity with provided text in your browser (uses a browser-ported version of OpenAI's CLIP model and the web's new File System Access API)
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
ChatSense - Llama 2 + Code Llama + CLIP-based chatbot
Deep learning pet breed recognition app
An experiment with movie scenes and contrastive learning
Computation-free personalization at test time for sEMG gesture classification. Fast (GPU/CPU) Ninapro API.
CLIP as a service - embed images and sentences; object recognition, visual reasoning, image classification, and reverse image search
Text-to-image and reverse image search engine built on vector similarity search, using the CLIP vision-language transformer for semantic embeddings and Qdrant as the vector store
Feed-forward VQGAN-CLIP model that eliminates the need to optimize VQGAN's latent space for each input prompt
A list of projects that use OpenAI's CLIP model.
OpenAI CLIP + Faiss image semantic search
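Several of these search projects share the same retrieval step: pre-compute CLIP image embeddings, embed the text query, and rank by cosine similarity. A minimal sketch of that ranking with numpy, using toy vectors in place of real CLIP features (in practice Faiss's `IndexFlatIP` over unit-normalized vectors computes the same thing at scale):

```python
import numpy as np

def cosine_topk(query_emb, index_embs, k=3):
    # Unit-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    m = index_embs / np.linalg.norm(index_embs, axis=1, keepdims=True)
    scores = m @ q
    top = np.argsort(-scores)[:k]  # indices of the k most similar images
    return top, scores[top]

# Toy 4-image "index"; in the real pipeline these would be CLIP image
# features and the query a CLIP text feature.
index = np.array([[1.0, 0.0],
                  [0.8, 0.6],
                  [0.0, 1.0],
                  [-1.0, 0.0]])
query = np.array([1.0, 0.1])
ids, scores = cosine_topk(query, index, k=2)
```

Because the vectors are unit-normalized, swapping this brute-force scan for a Faiss inner-product index changes only the indexing code, not the results.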
KoCLIP: Korean port of OpenAI CLIP, in Flax
GUI to explore large image collections with text queries