
🌟 Instructions for generating the dataset we propose.

Prepare training datasets

Structure

Train

```
language_vision_interface
├──scripts
├──data
│   ├── image_pairs_train
│   │   ├── Abyssianian_1_cls
│   │   │   ├── Abyssianian_1_0
│   │   │   ├── Abyssianian_1_1
│   │   ├── Abyssianian_2_cls
│   │   │   ├── Abyssianian_2_0
│   │   │   ├── Abyssianian_2_1
│   │   ├── ...
│   │   ├── American_bulldog_100_cls
│   │   │   ├── American_bulldog_100_0
│   │   │   ├── American_bulldog_100_1
│   │   ├── ...
│   │   ├── Abyssianian_1_seg
│   │   │   ├── Abyssianian_1_0
│   │   │   ├── Abyssianian_1_1
│   │   ├── Abyssianian_2_seg
│   │   │   ├── Abyssianian_2_0
│   │   │   ├── Abyssianian_2_1
│   │   ├── ...
│   │   ├── American_bulldog_100_seg
│   │   │   ├── American_bulldog_100_0
│   │   │   ├── American_bulldog_100_1
│   │   ├── ...
│   │   ├── Abyssianian_1_det
│   │   │   ├── Abyssianian_1_0
│   │   │   ├── Abyssianian_1_1
│   │   ├── Abyssianian_2_det
│   │   │   ├── Abyssianian_2_0
│   │   │   ├── Abyssianian_2_1
│   │   ├── ...
│   │   ├── American_bulldog_100_det
│   │   │   ├── American_bulldog_100_0
│   │   │   ├── American_bulldog_100_1
│   │   ├── ...
│   │   ├── bathroom_0001_01_depes
│   │   │   ├── bathroom_0001_0
│   │   │   ├── bathroom_0001_1
│   │   ├── bathroom_0001_02_depes
│   │   │   ├── bathroom_0001_0
│   │   │   ├── bathroom_0001_1
│   │   ├── ...
│   │   ├── living_room_0010_33_depes
│   │   │   ├── living_room_0010_33_0
│   │   │   ├── living_room_0010_33_1
```
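
After generating the data, it is easy to sanity-check that every sample folder holds an input/target image pair. The snippet below is a minimal sketch, not part of the repository; the `data/image_pairs_train` root and the two-images-per-sample assumption come from the tree above.

```python
import os

# Minimal sanity check (not part of the repo): assumes every sample folder
# under data/image_pairs_train/<task_folder>/ holds one input/target image pair.
DATA_ROOT = "data/image_pairs_train"

def check_pairs(data_root: str = DATA_ROOT) -> None:
    for task_dir in sorted(os.listdir(data_root)):
        task_path = os.path.join(data_root, task_dir)
        if not os.path.isdir(task_path):
            continue
        for sample_dir in sorted(os.listdir(task_path)):
            sample_path = os.path.join(task_path, sample_dir)
            if not os.path.isdir(sample_path):
                continue
            images = [f for f in os.listdir(sample_path)
                      if f.lower().endswith((".jpg", ".jpeg", ".png"))]
            if len(images) != 2:
                print(f"expected 2 images, found {len(images)}: {sample_path}")

if __name__ == "__main__":
    check_pairs()
```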


Prepare datasets

We pool all four datasets together and train on them at the same time.

NYUV2 - Depth estimation

Download the dataset here

Or, you can download the processed dataset by following the instructions here.
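
If you want to eyeball the depth maps before building image pairs, something like the sketch below works. It assumes the depth maps are stored as single-channel 16-bit PNGs, which may differ from the exact format of the download above; adjust accordingly.

```python
import numpy as np
from PIL import Image

# Quick visual check of a depth map (assumption: 16-bit PNG depth export;
# adjust if your NYUv2 download stores depth differently, e.g. as .mat/.npy).
def depth_to_uint8(depth_path: str, out_path: str) -> None:
    depth = np.array(Image.open(depth_path)).astype(np.float32)
    depth -= depth.min()
    depth /= max(depth.max(), 1e-6)  # normalize to [0, 1]
    Image.fromarray((depth * 255).astype(np.uint8)).save(out_path)

# depth_to_uint8("data/nyuv2/depth/<frame_id>.png", "depth_vis.png")
```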

MS-COCO - Object Detection

Download the dataset here

ADE20k - Semantic Segmentation

Download the dataset here. Download the instance annotation from here:

```
cd ADEChallengeData2016
wget http://sceneparsing.csail.mit.edu/data/ChallengeData2017/annotations_instance.tar
```
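
Once the download finishes, unpack the archive in place. The sketch below uses Python's `tarfile` and assumes it is run from inside `ADEChallengeData2016`; a plain `tar -xf annotations_instance.tar` does the same thing.

```python
import tarfile

# Unpack the instance annotations next to the ADE20k images
# (assumes the tar was downloaded into ADEChallengeData2016/).
with tarfile.open("annotations_instance.tar") as tar:
    tar.extractall(path=".")
```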
Oxford-IIIT - Classification

Download the dataset here


External datasets for testing:

SUNRGBD - Depth estimation

Download the dataset here and download the split file from here. We remove the NYUv2 part.
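
How the NYUv2 portion is removed depends on the split file's format. As a rough illustration only (an assumption, not the repository's exact procedure), if the split were a plain-text list of scene paths, dropping entries that mention NYU would look like this:

```python
# Illustrative only: assumes the split file is a plain-text list of scene
# paths and that NYUv2 scenes are identifiable by "NYU" in the path.
def drop_nyu_entries(split_in: str, split_out: str) -> None:
    with open(split_in) as f:
        kept = [line for line in f if "nyu" not in line.lower()]
    with open(split_out, "w") as f:
        f.writelines(kept)

# drop_nyu_entries("sunrgbd_split.txt", "sunrgbd_split_no_nyu.txt")
```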

PASCAL VOC2012 - Segmentation & Detection

Download the dataset here

We need to convert the VOC format to the COCO one by running:

```
python data/VOCdevkit/VOC2012/voc2coco.py
```
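
For reference, the conversion boils down to reading each VOC XML annotation and re-emitting its boxes in COCO's `[x, y, width, height]` form. The sketch below is a simplified illustration of that mapping, not the repository's `voc2coco.py`.

```python
import xml.etree.ElementTree as ET

# Simplified illustration of the VOC -> COCO box mapping (not voc2coco.py):
# VOC stores corner coordinates, COCO stores [x, y, width, height].
def voc_boxes_to_coco(xml_path: str):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        xmin, ymin = float(bb.findtext("xmin")), float(bb.findtext("ymin"))
        xmax, ymax = float(bb.findtext("xmax")), float(bb.findtext("ymax"))
        boxes.append({"category": name,
                      "bbox": [xmin, ymin, xmax - xmin, ymax - ymin]})
    return boxes

# voc_boxes_to_coco("data/VOCdevkit/VOC2012/Annotations/<image_id>.xml")
```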

Build our training data

Next, we process these datasets to build our training data by running the following commands:

```
python dataset_creation/format_dataset.py --save_root <path_to_save> --tasks <vision tasks> --data_root <path_to_dataset>
# specific examples
## coco
python build_data/format_dataset_rp.py --save_root './image_pairs' --tasks ['det'] --data_root './data/coco'
```
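
If you want to format all four datasets in one go, a small driver loop over the command above is convenient. The task names and data roots below are assumptions based on the example invocation and the folder suffixes shown earlier, so verify them against your checkout before running.

```python
import subprocess

# Assumed (task, data_root) combinations based on the example above;
# check the task names and script path against your checkout before running.
JOBS = [
    ("det", "./data/coco"),
    ("seg", "./data/ADEChallengeData2016"),
    ("cls", "./data/oxford-iiit-pet"),
    ("depes", "./data/nyuv2"),
]

for task, data_root in JOBS:
    subprocess.run(
        ["python", "build_data/format_dataset_rp.py",
         "--save_root", "./image_pairs",
         "--tasks", f"['{task}']",
         "--data_root", data_root],
        check=True,
    )
```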