An optimized implementation of masked autoencoders (MAEs)
An optimized implementation of spatiotemporal masked autoencoders
Investigate possibilities for Vision Transformers with multiscale grids
TorchGeo: datasets, transforms, and models for geospatial data
Project for Computer Vision course @ MSc in Artificial Intelligence, UniVR
Change detection on satellite images with masked autoencoders.
Train MAE on Kaggle with 2 GPUs (T4 x2), logging to Wandb
The code for the paper "Contrastive Masked Autoencoders for Self-Supervised Video Hashing" (AAAI'23)
Reproducing the MET framework with PyTorch
PyTorch implementation of MADE
Generative modeling and representation learning through reconstruction
PyTorch wrapper for Deep Density Estimation Models
code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
Extraction of deep features/representations of birds using deep learning algorithms.
HSIMAE: A Unified Masked Autoencoder with large-scale pretraining for Hyperspectral Image Classification
A Vector Quantized Masked AutoEncoder for speech emotion recognition
Codebase for Imperial MSc AI Individual Project - Self-Supervised Learning for Audio Inference
Official code for the CVPR 2024 paper "VideoMAC: Video Masked Autoencoders Meet ConvNets"
Official codebase for "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling".
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)
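Most of the vision repositories listed above build on the same core recipe popularized by MAE: randomly mask a large fraction of image patches, encode only the visible ones, and reconstruct the missing content. The snippet below is a minimal, illustrative sketch of that random patch-masking step in PyTorch; the tensor shapes, function name, and the default mask ratio of 0.75 are assumptions for the example and are not taken from any specific repository in this list.

```python
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """MAE-style random masking: drop a fraction of patch tokens per sample.

    patches: (batch, num_patches, dim) tensor of patch embeddings.
    Returns the visible patches, a binary mask (1 = masked) in the original
    patch order, and the indices needed to restore that order later.
    """
    batch, num_patches, dim = patches.shape
    num_keep = int(num_patches * (1.0 - mask_ratio))

    # A random score per patch; sorting it gives a random permutation per sample.
    noise = torch.rand(batch, num_patches, device=patches.device)
    ids_shuffle = torch.argsort(noise, dim=1)        # random ordering of patches
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    # Keep the first `num_keep` patches of the random ordering as the visible set.
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, dim))

    # Binary mask in the original patch order: 0 = kept, 1 = masked.
    mask = torch.ones(batch, num_patches, device=patches.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return visible, mask, ids_restore

# Example: 196 patches (14x14 grid) of dimension 768, 75% masked -> 49 visible tokens.
x = torch.randn(8, 196, 768)
visible, mask, ids_restore = random_masking(x, mask_ratio=0.75)
print(visible.shape, mask.sum(dim=1))  # torch.Size([8, 49, 768]); 147 masked per sample
```

In the full MAE pipeline, only the visible tokens are passed through the encoder, and `ids_restore` is used afterwards to reinsert learned mask tokens at the masked positions before the lightweight decoder reconstructs the original patches.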