inference_on_dataset get Killed #5241

Open
antof27 opened this issue Mar 16, 2024 · 1 comment

Comments


antof27 commented Mar 16, 2024

Hi everyone,
I'm currently working on evaluating my model on the validation dataset. However, I'm running into a problem when calling the inference_on_dataset method.

Instructions To Reproduce the Issue:

from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
from detectron2.config import get_cfg
from detectron2 import model_zoo
import os
import torch
from register_dataset import load_dataset 

# Paths to JSON annotations and image roots
train_json_path = "<path/to/train/json>"
train_image_root = "<path/to/train/images>"
val_json_path = "<path/to/val/json>"
val_image_root = "<path/to/val/images>"

# Load test dataset
_, _, val_dataset_dicts, val_metadata = load_dataset(train_json_path, train_image_root, val_json_path, val_image_root)

# Initialize configuration
cfg = get_cfg()
cfg.OUTPUT_DIR = "<path/as/destination>"
cfg.DATASETS.TEST = ("validation_set",)
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml"))
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "one_epoch.pth")  
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.01
number_of_classes = len(val_metadata.thing_classes)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = number_of_classes
cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
print("Model device, ", cfg.MODEL.DEVICE)


# Create a predictor
predictor = DefaultPredictor(cfg)
print("Predictor created")

# Set up evaluator
evaluator = COCOEvaluator("validation_set", cfg, False, output_dir=cfg.OUTPUT_DIR)
print("Evaluator created")

# Create validation data loader
validation_loader = build_detection_test_loader(cfg, "validation_set", batch_size=1, num_workers=2)
print("Validation loader created")

# Perform inference on the validation dataset
results = inference_on_dataset(predictor.model, validation_loader, evaluator)
print("Evaluation results:", results)

Where load_dataset is:

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import register_coco_instances

def load_dataset(train_json_path, train_image_root, val_json_path, val_image_root):
    # Register COCO instances for training and validation sets
    register_coco_instances("training_set", {}, train_json_path, train_image_root)
    register_coco_instances("validation_set", {}, val_json_path, val_image_root)
    
    # Retrieve dataset dictionaries and metadata for training set
    train_dataset_dicts = DatasetCatalog.get("training_set")
    train_metadata = MetadataCatalog.get("training_set")
    
    # Retrieve dataset dictionaries and metadata for validation set
    val_dataset_dicts = DatasetCatalog.get("validation_set")
    val_metadata = MetadataCatalog.get("validation_set")
    return train_dataset_dicts, train_metadata, val_dataset_dicts, val_metadata

I'm expecting to obtain the predictions on the validation set, but when I run the script, this is the output:

Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
Model device,  cuda
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (2861, 1024) in the checkpoint but (1180, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (2861,) in the checkpoint but (1180,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (11440, 1024) in the checkpoint but (4716, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (11440,) in the checkpoint but (4716,) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint:
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}
Predictor created
COCO Evaluator instantiated using config, this is deprecated behavior. Please pass in explicit arguments instead.
Evaluator created

Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

Validation loader created
Starting inference on the validation dataset...
Loading and preparing results...
DONE (t=11.53s)
creating index...
index created!
Killed
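
As a side note, the "Skip loading parameter" warnings above suggest the checkpoint's box-predictor head was trained for a different number of classes (a 2861-row cls_score, i.e. 2860 classes plus background) than the 1180 outputs the config now builds, so those layers stay randomly initialized. A hypothetical sanity check, assuming one_epoch.pth is a standard detectron2 checkpoint with its weights stored under a "model" key and that load_dataset has already registered "validation_set":

import torch
from detectron2.data import MetadataCatalog

ckpt = torch.load("one_epoch.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # detectron2 checkpoints usually keep weights under "model"
# cls_score has one output per class plus one for the background class
print("classes in checkpoint:", state_dict["roi_heads.box_predictor.cls_score.weight"].shape[0] - 1)
print("classes in validation_set:", len(MetadataCatalog.get("validation_set").thing_classes))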

I don't get why the process is killed. Has anyone had the same issue? Thanks in advance!
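
For what it's worth, a bare "Killed" with no Python traceback right after pycocotools prints "index created!" usually means the kernel's OOM killer terminated the process rather than detectron2 raising an error; dmesg or journalctl -k typically shows an "Out of memory: Killed process ..." entry in that case. Below is a minimal sketch of how one might confirm this from inside the script, reusing the objects from the reproduction code above; print_peak_memory is a hypothetical helper, not a detectron2 API:

import resource

def print_peak_memory(tag):
    # ru_maxrss is reported in kilobytes on Linux
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"[{tag}] peak RSS so far: {peak_kb / 1024:.0f} MiB")

print_peak_memory("before inference_on_dataset")
results = inference_on_dataset(predictor.model, validation_loader, evaluator)
# If the process is OOM-killed during the call above, this second line never prints,
# which is itself a strong hint that memory is the problem.
print_peak_memory("after inference_on_dataset")

If memory does turn out to be the bottleneck, it may be worth checking whether keeping cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST at detectron2's default of 0.05 (instead of 0.01) reduces the evaluation's footprint, since a lower threshold keeps many more low-confidence boxes for the COCO evaluator to process.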

github-actions bot added the needs-more-info (More info is needed to complete the issue) label Mar 16, 2024

github-actions bot commented Mar 16, 2024

You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template.
The following information is missing: "Your Environment";

github-actions bot removed the needs-more-info label Mar 16, 2024