Add auc-roc metric #12023
Conversation
👋 Hello @haooyuee, thank you for submitting a YOLOv5 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is up-to-date with the ultralytics/yolov5 `master` branch. If your PR is behind, you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge master` locally.
- ✅ Verify all YOLOv5 Continuous Integration (CI) checks are passing.
- ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." — Bruce Lee
for more information, see https://pre-commit.ci
@haooyuee thanks for your contribution! It's great to see the addition of auc-roc metrics to YOLOv5. Your efforts to introduce this common medical imaging indicator to the field of object detection are appreciated. It's excellent that you've provided a way to generate AUC values based on bounding boxes, akin to the ConfusionMatrix in YOLOv5, allowing for a more granular evaluation. Keep up the good work of expanding the capabilities of YOLOv5!
Hello sir, have you made changes in your yolov5-auc directory? I wanted to calculate the AUC score for my dataset.
I haven't made changes, but the AUC branch is already three months behind the main branch. You can directly visit my GitHub to get the code: https://github.com/haooyuee/YOLOv5-AUROC-MedDetect
@haooyuee thanks for reaching out! I haven't made any changes to the
Thanks sir, I have cloned that repo, but it is showing an mAUC score of 0 on every iteration, and one more problem: it is using Python 2.7 instead of 3.9.
@KAKAROT12419 glad you've cloned the repository! Regarding the MAUC score showing as 0 on each iteration, there might be an issue with the dataset or the calculation logic. As for the Python version, the YOLOv5-AUROC-MedDetect repository should indeed be compatible with Python 3.9. You may need to review the code and dataset to address the MAUC score calculation and ensure Python 3.9 compatibility. Let me know if you need further assistance!
Sir, if the dataset had a problem, their precision and recall calculation would have also caused the problem. The repo you mentioned shows the AUC score in the results, so the AUC score calculation will also be correct. Now, I need clarification about what is wrong. I am confused.
@KAKAROT12419 I understand your confusion. The AUC score calculation and the precision and recall calculation may have different underlying logic and may not be impacted in the same way by dataset issues. I recommend reviewing the AUC calculation logic in the repository and ensuring that it aligns with your expectations. If you need further clarification, feel free to reach out to the repository owner for additional insight.
Problems with AUC values of 0 at each iteration: may I ask whether the values within the confusion matrix at the end of training all show 0? Since I'm using logic similar to the confusion matrix to filter for matches between predicted and true bounding boxes, the values of the two should correlate with each other. All or individual category AUC scores may indeed appear to be 0 at the beginning of training, and the situation may ease as training time increases. If your data labels are very unbalanced, then rare labels are likely to have an AUC value of 0. I invite you to read class AUROC in metrics.py; it contains the logic for calculating AUC scores. The Python version is theoretically the same as YOLOv5's.
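As a rough illustration of the behaviour described above (not the PR's actual `AUROC` class), a per-class bounding-box-level AUC can be computed from matched-prediction scores and binary labels with the Mann-Whitney rank statistic. Note how it degenerates to 0 whenever a class has no positives or no negatives, which is exactly the "rare label → AUC of 0" situation:

```python
import numpy as np

def binary_auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic.

    Returns 0.0 when either class is absent, mirroring the
    'rare label -> AUC of 0' behaviour described in the thread.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    if n_pos == 0 or n_neg == 0:
        return 0.0  # degenerate: only one class present
    # rank all scores ascending (average ranks for ties)
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, scores.size + 1)
    for s in np.unique(scores):
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    # U statistic of the positive class, normalised to [0, 1]
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

A perfectly separated class scores 1.0, a random one about 0.5, and a class with no negatives (or no positives) returns 0.0 under this convention.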
@haooyuee I understand this situation can be perplexing. Regarding the AUC score being 0 at each iteration, it's helpful to check if the values within the confusion matrix at the end of training all show 0. The AUC scores can start at 0 and increase as training progresses, especially for rare labels in unbalanced datasets. You can review the logic for calculating AUC scores in the class AUROC in metrics.py for more insight. Also, the Python version used should be consistent with YOLOv5. Let me know if you need any further assistance!
Yes, Sir, I know why the AUC score is 0. I am working on a chest X-ray dataset in which we have to detect nodules, and basically, I have only one class. But my confusion matrix value for true negatives is 0; that's why the AUC score is 0. Will changing any hyperparameter affect it? Edit: Sir, because of the true negatives, the AUC score is 0.
@KAKAROT12419 I understand the challenge you're facing. In the context of a chest X-ray dataset with only one class and a true negative confusion matrix value of 0, it's expected for the AUC score to be 0. Since the AUC score heavily depends on true negatives, this can impact the score when working with imbalanced datasets or single-class detection. It's unlikely that changing hyperparameters will affect this, as it's a fundamental characteristic of the dataset. If you have further questions or need assistance with any other aspect, feel free to let me know.
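To make the arithmetic concrete, here is a minimal illustration (the values are assumed from the confusion matrix described above, not from an actual run) of why zero true negatives pins the false-positive rate at 1, collapsing the ROC curve:

```python
# Illustrative only: with a single foreground class, unmatched predictions
# count as background false positives and there are no true negatives.
fp, tn = 1, 0            # values reported from the confusion matrix above
fpr = fp / (fp + tn)     # false-positive rate at this operating point
print(fpr)               # -> 1.0 at every threshold
# With FPR stuck at 1, the ROC curve has no left-hand region to sweep,
# so implementations commonly report an AUC of 0 in this degenerate case.
```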
Sir, this is the line from the research paper which I am following: "We post-process all bounding box predictions by first applying
@KAKAROT12419 yes, you are correct. The research paper specifies an intersection over union (IOU) threshold of 20%, which means that if two bounding boxes overlap by an IOU greater than 20%, the box with the lower prediction score is removed. Additionally, all bounding boxes with a predicted score below 0.1 are removed after the ensemble process. You may want to consider adjusting the IOU threshold and predicted score threshold in your post-processing to align with the specifications in the research paper. Let me know if you need further assistance!
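The post-processing described in that paper (greedy NMS at a 20% IoU threshold, then dropping boxes scoring below 0.1) can be sketched as follows. This is an illustrative NumPy version, with hypothetical function names, not the paper's or the repo's actual code:

```python
import numpy as np

def iou_1_to_many(box, boxes):
    """IoU of one xyxy box against an (N, 4) array of xyxy boxes."""
    tl = np.maximum(box[:2], boxes[:, :2])     # intersection top-left
    br = np.minimum(box[2:], boxes[:, 2:])     # intersection bottom-right
    wh = np.clip(br - tl, 0, None)
    inter = wh[:, 0] * wh[:, 1]
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_all = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + area_all - inter)

def postprocess(boxes, scores, iou_thres=0.20, score_thres=0.10):
    """Greedy NMS at the paper's 20% IoU, then drop scores below 0.1."""
    keep = []
    order = np.argsort(scores)[::-1]           # highest score first
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        if rest.size == 0:
            break
        overlaps = iou_1_to_many(boxes[i], boxes[rest])
        order = rest[overlaps <= iou_thres]    # suppress heavy overlaps
    keep = np.array(keep)
    return keep[scores[keep] >= score_thres]   # final score filter
```

For example, of two boxes overlapping at IoU ≈ 0.68, only the higher-scoring one survives NMS, and any remaining box scoring below 0.1 is then discarded.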
Thank you so much, Sir, for the guidance. I want to clear up one thing: I need to change the value of IoU to 0.20, and the threshold value will stay the same?
@KAKAROT12419 you're welcome! If you need to change the value of IOU to 0.20 while keeping the threshold value the same, it seems aligned with the specifications from the research paper. Feel free to proceed with this adjustment. If you have any further questions or need additional assistance, don't hesitate to ask!
Thank you so much, Sir. I wanted to ask: do I need to change all these IoU values,
@KAKAROT12419 You're welcome! Yes, to align the IOU threshold with the specifications from the research paper, you should update all instances of the IOU threshold values in the code, including those in val.py and train.py, as you've mentioned. Please ensure that the IOU threshold values are consistently updated across all relevant sections of the code. If you have any other queries or need further assistance, feel free to ask!
Sir, I wanted to ask about one issue with YOLOv5. I am training my dataset on YOLOv5 on chest X-rays, and I have only one class. The problem is that in the confusion matrix, it shows background-class false positives as 1 and true negatives as 0. My precision and recall are 0.67 and 0.61 respectively. Is this a bug, and is there any way to resolve it? I think this is why the mAUC score is 0.
@KAKAROT12419 It seems that the issue with the confusion matrix displaying false positives for the background class and true negatives as 0 might be related to how the single-class dataset is being handled. This could indeed impact the MAUC score. You could consider modifying the dataset handling or the model inputs to ensure that there is proper segmentation of the classes. Additionally, analysing the ground truth annotations and the model outputs might provide insights into the cause of the issue. If you need further assistance or have other questions, feel free to ask!
Hello sir, I am here to disturb you again. I have tested this YOLOv5 auc-roc curve repo on a different dataset, which is a single-class dataset. On running the model on this single-class dataset, the mAUC score is again 0; I think there is some mistake in the implementation of the AUC-ROC logic.
@KAKAROT12419 we appreciate your feedback and will review the implementation to ensure it aligns with single-class datasets. Thank you for bringing this to our attention; we will investigate the issue further and make any necessary adjustments to the AUC-ROC logic for single-class datasets. Thank you for your patience.
Thank you so much, sir, for helping. I will be waiting for the changes in the code.
@KAKAROT12419 you're very welcome! We highly value your input and will work diligently to address the issue. If you have any more questions or need further assistance, feel free to ask. Thank you for your understanding and patience.
👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap. We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved. For additional resources and information, please see the links below:
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Issue: #2782 #2469
Summary
Added ROC metrics:
In the log, the mAUC value will be updated at each validation epoch; a new AUC column is then added at the end of training.
When training is complete, we generate a polar_chart figure of the AUC and an auroc_curve figure.
Taking the vindr CXR dataset as an example, the results of the two figures are:
At the same time, when the experiment uses wandb, the ROC curve of each class will be updated during training.
The AUC value is one of the common indicators in the field of medical imaging, but it is rarely used in the field of object detection. I managed to generate AUC values which, like the ConfusionMatrix in YOLOv5, are based on bounding boxes instead of the image level. Therefore, it will be affected by the two parameters conf and iou_thres.
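The idea of a bounding-box-level AUC depending on conf and iou_thres can be sketched as follows. This is an illustrative reconstruction of the matching step (hypothetical names, not the PR's `AUROC` class): predictions above the confidence threshold are labelled positive if they overlap some ground-truth box at or above the IoU threshold, producing the (score, label) pairs an AUC is computed from:

```python
import numpy as np

def box_iou_matrix(a, b):
    """Pairwise IoU between (N, 4) and (M, 4) arrays of xyxy boxes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    tl = np.maximum(a[:, None, :2], b[None, :, :2])  # intersection top-left
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])  # intersection bottom-right
    wh = np.clip(br - tl, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def match_for_auc(pred_boxes, pred_scores, gt_boxes, conf=0.25, iou_thres=0.45):
    """Keep predictions above `conf`, then label each one positive if it
    overlaps a ground-truth box at IoU >= `iou_thres`. The returned
    (scores, labels) pairs are what a box-level AUC is computed on."""
    m = pred_scores >= conf
    pb, ps = pred_boxes[m], pred_scores[m]
    labels = np.zeros(len(pb), dtype=int)
    if len(pb) and len(gt_boxes):
        ious = box_iou_matrix(pb, gt_boxes)
        labels = (ious.max(axis=1) >= iou_thres).astype(int)
    return ps, labels
```

Raising conf removes low-scoring predictions from the pool, and raising iou_thres flips borderline matches from positive to negative, so both parameters shift the resulting AUC, as the description above notes.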
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
📊 Key Changes
- `process_batch` and `out` in the `AUROC` class to compute AUC-ROC scores.
- `train.py` to include AUC-ROC calculation in the training loops.
- `utils/loggers/__init__.py` to support logging the AUC metric.

🎯 Purpose & Impact
The integration of AUC-ROC offers users insight into the performance of the classification model beyond the existing mAP (mean Average Precision) metric, considering the balance between true positive and false positive rates. Its impacts are:
🌟 Summary
"🚀 AUC-ROC metric added to YOLOv5’s validation, providing an in-depth performance analysis tool for model evaluation!"