Evaluate robustness of adaptation methods on large vision-language models
Updated Aug 23, 2023 · Shell
Unofficial implementation for Sigmoid Loss for Language Image Pre-Training
VTC: Improving Video-Text Retrieval with User Comments
Bias-to-Text: Debiasing Unknown Visual Biases through Language Interpretation
📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS)
VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix (ICML 2022)
Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning
Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model
[NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training
Multi-Aspect Vision Language Pretraining - CVPR2024
A codebase for flexible and efficient Image Text Representation Alignment
Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023)
Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral]
Official repository for "CLIP model is an Efficient Continual Learner".
Demographic Bias of Vision-Language Foundation Models in Medical Imaging
SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models
Recognize Any Regions
[CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training"
A comprehensive collection of research papers on multimodal representation learning, all of which are cited and discussed in the accepted survey: https://dl.acm.org/doi/abs/10.1145/3617833