Is your feature request related to a problem? Please describe.
When I look up a model such as https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384, the model card says it was fine-tuned on ImageNet-1k in timm by Ross Wightman. While the card links to more details about pretraining, the hyperparameters for the fine-tuning step are hard to find.
Describe the solution you'd like
Could the fine-tuning hyperparameters be exposed as a model.finetune_cfg attribute, to make this information easier to discover?
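To make the suggestion concrete, here is a minimal sketch of what a `model.finetune_cfg` lookup could return, mirroring the style of timm's existing `pretrained_cfg`. Everything below is hypothetical: no `finetune_cfg` attribute exists in timm today, and the field names and registry are illustrative placeholders, not real hyperparameters for this model.

```python
# Hypothetical sketch of a finetune_cfg lookup (no such API exists in
# timm today). Field names are illustrative; unknown values are left as
# None rather than invented.
def get_finetune_cfg(model_name: str) -> dict:
    """Return the (hypothetical) fine-tuning hyperparameters for a model."""
    # A real implementation would resolve this from the model's Hub repo,
    # the way pretrained_cfg is resolved; here it is a hard-coded example.
    registry = {
        "convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384": {
            "opt": "adamw",      # optimizer name (placeholder)
            "lr": None,          # learning rate (unknown)
            "epochs": None,      # number of fine-tuning epochs (unknown)
            "batch_size": None,  # global batch size (unknown)
            "weight_decay": None,
        }
    }
    return registry.get(model_name, {})

cfg = get_finetune_cfg("convnext_large_mlp.clip_laion2b_augreg_ft_in1k_384")
print(sorted(cfg))  # ['batch_size', 'epochs', 'lr', 'opt', 'weight_decay']
```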
Describe alternatives you've considered
Alternatively, the args.yaml file could be provided in, or linked from, the model card.
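If an args.yaml (the file timm's training script writes out) were attached to the model repo, its flat `key: value` layout is easy to read even without a YAML library. The fragment below is illustrative only, not the real args.yaml for this model:

```python
# Illustrative args.yaml fragment (NOT the real hparams for this model).
args_yaml = """\
model: convnext_large_mlp
opt: adamw
sched: cosine
"""

def parse_flat_yaml(text: str) -> dict:
    """Minimal parser for a flat 'key: value' YAML document."""
    out = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            out[key.strip()] = value.strip()
    return out

args = parse_flat_yaml(args_yaml)
print(args["opt"])  # adamw
```

In practice one would use `yaml.safe_load` from PyYAML rather than a hand-rolled parser; the point is only that args.yaml is a small, self-describing record of the run.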
Additional context
Thank you very much! I found some ConvNeXt hparams at https://gist.github.com/rwightman/ee0b02c1e99a0761264d1d1319e95e5b, but only for the nano and atto variants. I'm not sure whether those remain strong hyperparameters for fine-tuning large models. Should I start my sweep from these much smaller models' hparams?