
# Awesome Multi-Modal Reinforcement Learning

This is a collection of research papers on multi-modal reinforcement learning (MMRL). The repository is continuously updated to track the frontier of MMRL. Some papers are not strictly about RL, but we include them because they may be useful for MMRL research.

Welcome to follow and star!

## Introduction

Multi-modal RL agents learn from video (images), language (text), or both, as humans do. We believe it is important for intelligent agents to learn directly from images and text, since such data can be easily obtained from the Internet.
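To make the setting concrete, here is a minimal sketch (in PyTorch, not taken from any paper listed here) of a policy that fuses an image observation with a tokenized language instruction. All module and dimension choices are illustrative assumptions, not a reference implementation.

```python
# Hypothetical multi-modal policy: fuses a CNN image encoding with
# mean-pooled instruction token embeddings, then outputs action logits.
import torch
import torch.nn as nn

class MultiModalPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, num_actions=4):
        super().__init__()
        # Visual encoder: a small CNN over 64x64 RGB frames.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, embed_dim), nn.ReLU(),
        )
        # Language encoder: mean-pooled token embeddings of the instruction.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Fusion + policy head over the concatenated features.
        self.policy = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, image, tokens):
        v = self.visual(image)                    # (B, embed_dim)
        t = self.token_embed(tokens).mean(dim=1)  # (B, embed_dim)
        return self.policy(torch.cat([v, t], dim=-1))  # action logits

# Usage: a batch of two 64x64 RGB observations and 7-token instructions.
policy = MultiModalPolicy()
logits = policy(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 7)))
action = torch.distributions.Categorical(logits=logits).sample()
```

Many of the papers below replace these simple encoders with pretrained vision and language models, but the overall structure (encode each modality, fuse, act) is the common thread.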


## Table of Contents

- [Introduction](#introduction)
- [Papers](#papers)
- [Contributing](#contributing)
- [License](#license)

## Papers

```
format:
- [title](paper link) [links]
  - authors.
  - key words.
  - experiment environment.
```

### ICLR 2024

### ICLR 2023

- PaLI: A Jointly-Scaled Multilingual Language-Image Model (notable top 5%)
  - Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
  - Key Words: jointly scaled language and vision components, strong zero-shot performance
  - ExpEnv: None
- VIMA: General Robot Manipulation with Multimodal Prompts
  - Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan. NeurIPS Workshop 2022
  - Key Words: multimodal prompts, transformer-based generalist agent model, large-scale benchmark
  - ExpEnv: VIMA-Bench, VIMA-Data
- Mind's Eye: Grounded Language Model Reasoning Through Simulation
  - Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai
  - Key Words: grounding language in the physical world, reasoning ability
  - ExpEnv: MuJoCo

### ICLR 2022

### ICLR 2021

### ICLR 2019

### NeurIPS 2023

### NeurIPS 2022

### NeurIPS 2021

### NeurIPS 2018

### ICML 2022

### ICML 2019

### ICML 2017

### CVPR 2022

### CoRL 2022

### Other

### ArXiv

## Contributing

Our purpose is to make this repo even better. If you are interested in contributing, please refer to HERE for contribution instructions.

## License

Awesome Multi-Modal Reinforcement Learning is released under the Apache 2.0 license.
