Learning Cortical Anomaly through Masked Encoding for Unsupervised Heterogeneity Mapping.
Updated May 29, 2024
[NeurIPS 2023] Masked Image Residual Learning for Scaling Deeper Vision Transformers
Pre-training a Vision Transformer with Masked Image Modelling for semantic segmentation
[Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897)
OpenMMLab Pre-training Toolbox and Benchmark
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)
[ICML 2023] Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
Custom Groovy scripts for QuPath
[ICLR2024] Exploring Target Representations for Masked Autoencoders
Official Code of the paper "Cross-Scale MAE: A Tale of Multi-Scale Exploitation in Remote Sensing"
Official PyTorch implementation of the MOOD series: (1) MOODv1: Rethinking Out-of-Distribution Detection: Masked Image Modeling Is All You Need. (2) MOODv2: Masked Image Modeling for Out-of-Distribution Detection.
[CVPR'23] Hard Patches Mining for Masked Image Modeling
Official codebase for "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling".
This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning"
Code to reproduce experiments from the paper "Continual Pre-Training Mitigates Forgetting in Language and Vision" https://arxiv.org/abs/2205.09357
[ICCV 2023] You Only Look at One Partial Sequence
PyTorch implementation of an energy transformer, an energy-based recurrent variant of the transformer.
MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning
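The repositories above share one core pretraining recipe: hide a random subset of image patches and train the network to reconstruct the hidden content from the visible patches. A minimal, framework-free Python sketch of that masking step (the function name is illustrative; the 14×14 patch grid and 75% mask ratio follow the convention popularized by MAE, though individual repos vary):

```python
import random


def random_mask_patches(num_patches, mask_ratio, seed=None):
    """Split patch indices into visible and masked sets, as in MAE-style
    masked image modeling: a random subset of patches is hidden, and the
    encoder typically only processes the visible ones."""
    rng = random.Random(seed)
    indices = list(range(num_patches))
    rng.shuffle(indices)
    num_masked = int(num_patches * mask_ratio)
    masked = sorted(indices[:num_masked])
    visible = sorted(indices[num_masked:])
    return visible, masked


# A 224x224 image with 16x16 patches gives a 14x14 grid of 196 patches;
# masking 75% leaves 49 visible patches for the encoder.
visible, masked = random_mask_patches(196, 0.75, seed=0)
```

Variants listed here change what surrounds this step (hierarchical/sparse masking for CNNs in SparK, mixed images in MixMIM, hard-patch selection in HPM), but the visible/masked split itself is the common starting point.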