Group images by provided labels using OpenAI/CLIP (Python, updated Jun 10, 2023)
GUI to explore large image collections with text queries
Visual Search with OpenAI CLIP
SpaceVector is a platform for semantic search over satellite images using state-of-the-art AI, aiming to make satellite imagery easier to use.
ChatSense - Llama 2 + Code Llama + CLIP based Chatbot
An experiment with movie scenes and contrastive learning
Visual and Vision-Language Representation Pre-Training with Contrastive Learning
Text2ImageDescription retrieves relevant images from the Pascal VOC 2012 dataset using OpenAI CLIP, based on text queries, and generates descriptions with a quantized Mistral-7B model.
Search for relevant images using a text or image query.
Deep learning pet breed recognition app
Text-to-image and reverse image search engine built on vector similarity search, using the CLIP vision-language transformer for semantic embeddings and Qdrant as the vector store
Search images by text input with CLIP
Generative models for architecture prose and schematics
CLIFS (CLIP-based Frame Selection) is a Python function that takes a video file and a text prompt, then uses the CLIP (Contrastive Language-Image Pre-training) model to find the video frame most similar to the prompt.
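The frame-selection step that CLIFS describes reduces to an argmax over cosine similarities between the text embedding and the per-frame embeddings. A minimal sketch of that core step, assuming the CLIP embeddings have already been computed (the function name and array shapes here are illustrative, not taken from the CLIFS repo):

```python
import numpy as np

def most_similar_frame(frame_embeddings: np.ndarray, text_embedding: np.ndarray) -> int:
    """Return the index of the frame whose embedding is most similar
    (by cosine similarity) to the text embedding.

    frame_embeddings: shape (num_frames, dim), one CLIP image embedding per frame.
    text_embedding:   shape (dim,), the CLIP text embedding of the prompt.
    """
    # L2-normalize so that a plain dot product equals cosine similarity.
    frames = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    text = text_embedding / np.linalg.norm(text_embedding)
    similarities = frames @ text
    return int(np.argmax(similarities))
```

In practice the frame embeddings would come from running CLIP's image encoder over frames sampled from the video, and the text embedding from its text encoder, with both already normalized by most CLIP implementations.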
Generation of faces, numbers, and images, plus Stable Diffusion inpainting via segmentation with SAM and the CLIP model
Recommendation system that retrieves similar items
OpenAI's CLIP neural network