Is prepare_model mandatory to use? #2819

Open
CangHaiQingYue opened this issue Mar 11, 2024 · 1 comment

Comments

CangHaiQingYue commented Mar 11, 2024

Hi, I’m trying to apply AIMET to YOLO v5. I found that after using `prepare_model`, during the forward pass of fine-tuning, the input image no longer goes through the model's original forward function (https://github.com/ultralytics/yolov5/blob/956be8e642b5c10af4a1533e09084ca32ff4f21f/models/yolo.py#L126), which leads to other errors.
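For context, a minimal sketch (toy model, not yolov5) of how `prepare_model` from `aimet_torch.model_preparer` is typically invoked; the returned object is a traced torch.fx GraphModule, so the original Python forward is no longer executed:

```python
import torch
import torch.nn as nn
from aimet_torch.model_preparer import prepare_model

class TinyNet(nn.Module):
    """Toy stand-in for the real model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = TinyNet().eval()

# prepare_model symbolically traces the module (torch.fx) and returns a new
# GraphModule whose forward runs the traced graph, not the original Python
# forward -- which is why custom logic in yolov5's Model.forward is bypassed.
prepared = prepare_model(model)
out = prepared(torch.randn(1, 3, 32, 32))
```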

And another question: when I set "per_channel_quantization": "True" in QuantizationSimModel's config, does it mean that both activations and params are quantized per-channel, or only the params?
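For reference, a hedged sketch (not from the thread) of where that flag usually sits in the quantsim config JSON and how the file is passed to QuantizationSimModel; the field names follow AIMET's default config layout, but check the default_config.json shipped with your AIMET version:

```python
import json
import torch
from aimet_torch.quantsim import QuantizationSimModel

config = {
    "defaults": {
        "ops": {"is_output_quantized": "True"},
        "params": {"is_quantized": "True", "is_symmetric": "True"},
        "per_channel_quantization": "True",  # enables per-channel for parameters
    },
    "params": {},
    "op_type": {},
    "supergroups": [],
    "model_input": {},
    "model_output": {},
}
with open("quantsim_config.json", "w") as f:
    json.dump(config, f)

# Continuing from the toy model in the previous sketch.
dummy_input = torch.randn(1, 3, 32, 32)
sim = QuantizationSimModel(prepared, dummy_input=dummy_input,
                           config_file="quantsim_config.json")
```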

e-said commented Mar 15, 2024

Hello @CangHaiQingYue

The model preparer is highly recommended in AIMET; you can find some more info on this API here.
I didn't fully get the issue you are facing, but `partial` (i.e. functools.partial) could probably help you freeze some arguments before the forward pass; see the sketch below.
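One possible reading of that suggestion (an illustration, not code from the thread; the wrapper below is hypothetical): pin the extra forward arguments with functools.partial before running the model preparer, so tracing only sees the image tensor.

```python
import functools
import torch
import torch.nn as nn
from aimet_torch.model_preparer import prepare_model

class WithAugmentFlag(nn.Module):
    """Toy model whose forward takes an extra flag, like yolov5's augment/profile."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x, augment=False):
        if augment:  # runtime branch that symbolic tracing cannot follow
            x = torch.flip(x, [3])
        return self.conv(x)

class FrozenForward(nn.Module):
    """Hypothetical wrapper: freeze extra kwargs with functools.partial."""
    def __init__(self, model, **fixed_kwargs):
        super().__init__()
        self.model = model
        self._forward = functools.partial(model.forward, **fixed_kwargs)

    def forward(self, x):
        return self._forward(x)

wrapped = FrozenForward(WithAugmentFlag().eval(), augment=False)
prepared = prepare_model(wrapped)  # tracing now only sees the image tensor
out = prepared(torch.randn(1, 3, 32, 32))
```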

The per_channel_quantization flag applies to params only.

Regards,
Saïd
