Implementation of Alphafold 3 in Pytorch
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch
Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling
Pytorch Implementation of the sparse attention from the paper: "Generating Long Sequences with Sparse Transformers"
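The sparse patterns in that paper replace the full causal mask with a structured one. As a minimal NumPy sketch (the function name and shapes are illustrative, not the repo's API), a causal strided pattern lets position i attend to the previous `stride` positions plus every stride-th earlier position:

```python
import numpy as np

def strided_sparse_mask(seq_len, stride):
    """Causal strided sparsity in the spirit of Child et al. (2019):
    position i may attend to (a) the last `stride` positions and
    (b) every stride-th position before that. Illustrative helper only."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    causal = j <= i                        # no attending to the future
    local = (i - j) < stride               # recent neighborhood
    strided = (j % stride) == (stride - 1) # periodic "summary" columns
    return causal & (local | strided)
```

The resulting boolean mask is applied to the attention logits (masked entries set to -inf) before the softmax, reducing the attended positions per query from O(n) to roughly O(sqrt(n)) when stride ≈ sqrt(n).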
Implementation of MagViT2 Tokenizer in Pytorch
Implementation of a single layer of the MMDiT, proposed in Stable Diffusion 3, in Pytorch
Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks, out of Tsinghua / Ant group
Implementation of Q-Transformer, Scalable Offline Reinforcement Learning via Autoregressive Q-Functions, out of Google Deepmind
Implementation of MambaFormer in Pytorch ++ Zeta from the paper: "Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks"
Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch
Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper
Implementation of Band Split Roformer, SOTA Attention network for music source separation out of ByteDance AI Labs
An implementation of local windowed attention for language modeling
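Local windowed attention restricts each token to a fixed span of its recent neighbors. A minimal NumPy sketch of the idea (using the input as query, key, and value; this is an illustration, not the repo's actual interface):

```python
import numpy as np

def local_windowed_attention(x, window=4):
    """Naive causal sliding-window attention: each position attends only
    to itself and up to `window - 1` positions to its left.
    x: (seq_len, dim) token embeddings, used here as q = k = v."""
    seq_len, dim = x.shape
    scores = x @ x.T / np.sqrt(dim)  # (seq_len, seq_len) scaled dot products
    i = np.arange(seq_len)
    # keep only causal positions within the local window
    mask = (i[None, :] <= i[:, None]) & (i[:, None] - i[None, :] < window)
    scores = np.where(mask, scores, -np.inf)
    # numerically stable softmax over the allowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

This brings the cost per query from O(n) to O(window); efficient implementations compute only the windowed blocks rather than masking a full n x n matrix.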
Omni-Modality Processing, Understanding, and Generation
Transformers such as T5 and MarianMT enable effective understanding and generation of complex programming code. Consequently, they can help us in the data security field. Let's see how!
PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"
Implementation of "PaLM2-VAdapter" from the multi-modal model paper: "PaLM2-VAdapter: Progressively Aligned Language Model Makes a Strong Vision-language Adapter"
My implementation of the model KosmosG from "KOSMOS-G: Generating Images in Context with Multimodal Large Language Models"