
cifar10 flops higher than cifar100 on DenseNet(40% pruned) #8

Open
Sirius083 opened this issue May 28, 2019 · 1 comment

Comments

@Sirius083

Thanks for your great work. I have a small question about calculating FLOPs.
In Table 1 of the paper:
CIFAR-10, DenseNet-40 (40% pruned): 3.81 × 10^8 FLOPs
CIFAR-100, DenseNet-40 (40% pruned): 3.71 × 10^8 FLOPs
Since CIFAR-100 has 100 classes while CIFAR-10 has only 10, why are the FLOPs on CIFAR-10 higher than on CIFAR-100 for the same model?
Thanks in advance.

@liuzhuang13
Owner

Because these are two different models, and the algorithm prunes different parts of the network. Even if you prune a fixed fraction of channels (40% in this case), the resulting FLOPs depend on where you prune. For example, if you prune the early layers more, you will remove more FLOPs, since those layers have larger activation maps.
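A rough back-of-the-envelope sketch of this effect (the channel counts, spatial sizes, and pruning ratio below are made-up illustrative numbers, not the actual pruned DenseNet-40 architectures):

```python
def conv_flops(c_in, c_out, h_out, w_out, k=3):
    """Multiply-accumulate count of a k x k convolution producing a c_out x h_out x w_out output."""
    return k * k * c_in * c_out * h_out * w_out

# Early layers on CIFAR see 32x32 feature maps; late layers see 8x8 maps.
early = conv_flops(c_in=48, c_out=12, h_out=32, w_out=32)
late = conv_flops(c_in=168, c_out=12, h_out=8, w_out=8)

# Prune 40% of the output channels in each layer (keep 60%).
early_pruned = conv_flops(c_in=48, c_out=round(12 * 0.6), h_out=32, w_out=32)
late_pruned = conv_flops(c_in=168, c_out=round(12 * 0.6), h_out=8, w_out=8)

print(f"FLOPs saved by pruning the early layer: {early - early_pruned:,}")
print(f"FLOPs saved by pruning the late layer:  {late - late_pruned:,}")
```

With these numbers, removing the same fraction of channels saves several times more FLOPs in the early layer than in the late one, so two pruned models with the same overall channel-pruning ratio can easily end up with different total FLOPs.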
