🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP (Python, updated Jan 23, 2024)
Run OpenAI's CLIP model on iOS to search photos.
Simple implementation of OpenAI CLIP model in PyTorch.
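The PyTorch implementations listed here share CLIP's core mechanic: encode images and texts into a shared embedding space, L2-normalize, and score pairs by scaled cosine similarity. A minimal sketch of that ranking step, using random NumPy arrays as stand-ins for real encoder outputs (the 512-d size and the ~100 logit scale match the original CLIP release, but the embeddings here are placeholders, not model outputs):

```python
import numpy as np

# Placeholder embeddings standing in for CLIP's image/text encoder
# outputs (the released ViT-B/32 model produces 512-d vectors).
rng = np.random.default_rng(0)
image_embeds = rng.normal(size=(2, 512))   # 2 images
text_embeds = rng.normal(size=(3, 512))    # 3 candidate captions

# CLIP ranks by cosine similarity: L2-normalize, then dot product.
image_embeds /= np.linalg.norm(image_embeds, axis=-1, keepdims=True)
text_embeds /= np.linalg.norm(text_embeds, axis=-1, keepdims=True)

logit_scale = 100.0                        # CLIP's learned temperature, roughly exp(4.6)
logits = logit_scale * image_embeds @ text_embeds.T   # (2, 3) similarity matrix

# Softmax over captions turns each row into a probability distribution.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
best_caption = probs.argmax(axis=-1)       # index of the best caption per image
```

This same matrix of image-text logits is what CLIP's contrastive training objective pushes toward the identity pairing; at inference time, taking the argmax row-wise gives zero-shot classification.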
Visual UI analysis tool
[NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology.
[ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
Semantic Search demo featuring UForm, USearch, UCall, and Streamlit, to visualize and retrieve from image datasets, similar to "CLIP Retrieval"
A tool for searching local images by text description, powered by Rust + candle + CLIP
[ICCV2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answering
Semantic Emoji Search Plugin for FiftyOne
Traverse the space of concepts with a multi-modal similarity index in FiftyOne
OpenAI's CLIP neural network
The most impactful papers related to contrastive pretraining for multimodal models!
[ NeurIPS 2023 R0-FoMo Workshop ] Official Codebase for "Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data"
A lightweight deep learning model, with a web application, that answers image-based questions non-generatively for the VizWiz Grand Challenge 2023 by carefully curating the answer vocabulary and adding a linear layer on top of OpenAI's CLIP model as the image and text encoder
Youtube video moment searcher by text or photo
Text to image search & Image Similarity Search using @typesense
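Text-to-image search projects like the ones above reduce to nearest-neighbor lookup over an index of precomputed CLIP image embeddings. A self-contained sketch of that retrieval step, using random vectors as a hypothetical index (real systems would store encoder outputs in an engine such as Typesense or USearch rather than a NumPy array):

```python
import numpy as np

def top_k_search(query: np.ndarray, index: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k index rows most similar to the query,
    ranked by cosine similarity."""
    q = query / np.linalg.norm(query)
    rows = index / np.linalg.norm(index, axis=-1, keepdims=True)
    sims = rows @ q                      # cosine similarity per indexed item
    return np.argsort(-sims)[:k]         # highest similarity first

# Hypothetical index of 1,000 images as 512-d embeddings.
rng = np.random.default_rng(1)
index = rng.normal(size=(1000, 512))
query = index[42] + 0.01 * rng.normal(size=512)  # near-duplicate of item 42
hits = top_k_search(query, index, k=5)           # item 42 should rank first
```

For image similarity search, the query embedding comes from the image encoder instead of the text encoder; the ranking code is identical because both live in the same embedding space.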
This repository contains research work on Adversarial Robustness Analysis for Deep Models.
Generation of faces, numbers, and images, plus Stable Diffusion inpainting via segmentation with SAM and the CLIP model