Why do the TP/FP counts computed by val.py for a certain class differ from the counts I calculate from detect.py's predictions? #12992
Labels: question (Further information is requested)
Search before asking
Question
I set the same confidence and IoU thresholds in both val.py and detect.py, and the predicted bounding boxes are identical. However, when I compute TP and FP from the detect.py results, one class does not match the numbers val.py reports. I also noticed that detect.py introduces some error when saving the label results to txt files; could that cause the discrepancy when I use those files for my calculation? Please answer the question, thank you!
Specifically, for one class val.py reports FP = 48 and TP = 53, whereas my calculation from the detect.py output gives FP = 46 and TP = 55, which yields a higher mAP for this class than what val.py produces.
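For reference, a common source of this kind of mismatch is the matching rule itself: val.py assigns each ground-truth box to at most one prediction, processing predictions in descending confidence order, so duplicate detections of the same object become FP rather than TP. Rounding in the normalized txt labels saved by detect.py can also flip matches whose IoU sits right at the threshold. Below is a minimal, hypothetical sketch of such one-to-one greedy matching (it is not the actual val.py code; boxes are assumed to be in xyxy pixel format):

```python
import numpy as np

def box_iou(a, b):
    """Pairwise IoU between boxes a (N,4) and b (M,4), xyxy format."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])  # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])  # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def count_tp_fp(preds, gts, iou_thres=0.5):
    """Greedy one-to-one matching: sort predictions by confidence,
    each ground truth may be claimed by at most one prediction.
    preds: (N,5) rows of [x1, y1, x2, y2, conf]; gts: (M,4) xyxy."""
    preds = preds[np.argsort(-preds[:, 4])]  # highest confidence first
    tp, matched = 0, set()
    if len(gts):
        iou = box_iou(preds[:, :4], gts)
        for i in range(len(preds)):
            j = int(np.argmax(iou[i]))
            if iou[i, j] >= iou_thres and j not in matched:
                matched.add(j)  # this ground truth is now taken
                tp += 1
    return tp, len(preds) - tp  # (TP, FP)
```

If your own script allows several predictions to match the same ground truth, it will count more TP and fewer FP than val.py for crowded classes, which matches the direction of the discrepancy described above.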
Additional
No response