Dear author, I ran into some problems while reproducing your results on the NWPU dataset. I selected the best model using the validation-set metrics: after 1500 training epochs, the validation MAE reached about 73. I then ran that best model on the test set and submitted the results to the official benchmark website, but only got an MAE of 112. Do you know what the possible reasons are? Also, what MAE did you get on the validation set, approximately? Thank you in advance for your answers.
I think this issue is similar to the one with the UCF-QNRF dataset. For large-scale datasets, it is important to ensure that the training patches contain a sufficient number of people. If many training patches are empty, the training supervision will be weak.
Regarding performance on the validation set, I recall the MAE being between 40 and 50.
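To make the patch-sampling point concrete, here is a minimal sketch of a random-crop routine that resamples crops containing too few annotated heads. All names (`sample_patch`, `min_count`, `max_tries`) are illustrative, not from this repository:

```python
import numpy as np

def sample_patch(image, points, patch_size=256, min_count=1, max_tries=10):
    """Randomly crop a training patch, retrying crops that contain
    fewer than `min_count` annotated head points.

    image:  HxWxC array
    points: Nx2 array of (x, y) head coordinates
    Returns the crop with the most points seen within `max_tries`.
    """
    h, w = image.shape[:2]
    best = None
    for _ in range(max_tries):
        x0 = np.random.randint(0, max(w - patch_size, 0) + 1)
        y0 = np.random.randint(0, max(h - patch_size, 0) + 1)
        # Keep only the annotations that fall inside this crop.
        mask = ((points[:, 0] >= x0) & (points[:, 0] < x0 + patch_size) &
                (points[:, 1] >= y0) & (points[:, 1] < y0 + patch_size))
        patch_points = points[mask] - np.array([x0, y0])
        if best is None or len(patch_points) > len(best[1]):
            best = (image[y0:y0 + patch_size, x0:x0 + patch_size],
                    patch_points)
        if len(patch_points) >= min_count:
            break
    return best
```

Rejection sampling like this keeps near-empty patches out of each batch, so the density-map loss receives non-trivial supervision more often.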