About the NWPU-Crowd dataset #19

Open

little-seasalt opened this issue Apr 23, 2024 · 1 comment

@little-seasalt
Dear author, I ran into some problems while reproducing results on the NWPU-Crowd dataset. I selected the best model by the validation-set metrics: after 1500 epochs of training, the validation MAE reached about 73. I then evaluated the best model on the test set and submitted the results to the official website, but only got an MAE of 112. Do you know what the possible reasons are? Also, may I ask roughly what metric you obtained on the validation set? Thank you in advance for your answers.

@cxliu0
Owner

cxliu0 commented Apr 26, 2024

I think this issue is similar to the one with the UCF-QNRF dataset. For large-scale datasets, it is important to ensure that the training patches contain a sufficient number of people. If many training patches are empty, the training supervision will be weak; one way to avoid this is sketched below.
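A minimal sketch of this idea, not the actual code in this repo: resample random crop locations until the patch contains at least a minimum number of annotated points. The function name `sample_crop` and the `min_points` / `max_tries` parameters are illustrative assumptions.

```python
import random
import numpy as np

def sample_crop(image, points, crop_size=256, min_points=1, max_tries=10):
    """Randomly crop a training patch, retrying until the patch
    contains at least `min_points` annotated head points.

    image:  H x W x C numpy array
    points: N x 2 numpy array of (x, y) head annotations
    """
    h, w = image.shape[:2]
    for _ in range(max_tries):
        x0 = random.randint(0, max(w - crop_size, 0))
        y0 = random.randint(0, max(h - crop_size, 0))
        # keep only the annotations that fall inside the candidate crop
        inside = (
            (points[:, 0] >= x0) & (points[:, 0] < x0 + crop_size) &
            (points[:, 1] >= y0) & (points[:, 1] < y0 + crop_size)
        )
        if inside.sum() >= min_points:
            break  # found a crop with enough supervision
    patch = image[y0:y0 + crop_size, x0:x0 + crop_size]
    # shift the surviving annotations into patch coordinates
    patch_points = points[inside] - np.array([x0, y0])
    return patch, patch_points
```

An alternative is to center crops on randomly chosen annotated points, which guarantees non-empty patches without retrying.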

Regarding performance on the validation set, I recall the MAE is between 40 and 50.
