pytorch to onnx conversion #155

Open
Bea07 opened this issue Jan 16, 2023 · 1 comment

Bea07 commented Jan 16, 2023

Hi,
I have tried to convert a sat2map model to ONNX and found a significant decrease in accuracy.
Does anyone have advice or suggestions, or has anyone experienced something similar?
I have run many tests with different parameters for torch.onnx.export(), but with no improvement.
During the training phase I specified --norm instance, so the previously reported batch normalization problem should be excluded.
The generator network is unet_256.
Have you run into something similar?
Thanks,
Bea
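
For context, a minimal export sketch (assuming the generator is built with this repo's models.networks.define_G helper; the checkpoint path, output file name, and opset are placeholders, not the exact command used here):

```python
import torch
from models.networks import define_G  # generator factory from this repo

# Build a unet_256 generator with instance norm, as used for training.
# The checkpoint file name below is a placeholder.
netG = define_G(input_nc=3, output_nc=3, ngf=64, netG="unet_256",
                norm="instance", use_dropout=False)
netG.load_state_dict(torch.load("latest_net_G.pth", map_location="cpu"))
netG.eval()  # export the inference-mode graph

dummy_input = torch.randn(1, 3, 256, 256)  # one 256x256 RGB tile
torch.onnx.export(
    netG, dummy_input, "sat2map_G.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,  # illustrative; use the opset your runtime supports
)
```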

Bea07 (Author) commented Mar 17, 2023

The accuracy loss is caused by the batch normalization layers: BN behaves differently in inference mode than in training mode.
The test.py script does not run the model in inference mode unless you set the --eval parameter on the command line. In inference mode the PyTorch method model.eval() is called (a step that is also needed when exporting to ONNX), and it changes the behavior of BN. If your batch size is 1 at inference, retrain your model with instance normalization if possible; this can resolve the difference in outputs. It worked for me.
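
A quick way to see this is to compare the PyTorch output in eval mode against the ONNX Runtime output (a minimal sketch; it assumes onnxruntime is installed, netG is the loaded generator from the sketch above, and sat2map_G.onnx is the exported model with an input named "input"):

```python
import numpy as np
import onnxruntime as ort
import torch

x = torch.randn(1, 3, 256, 256)  # same shape as the export dummy input

netG.eval()  # the same step test.py only performs when --eval is passed
with torch.no_grad():
    torch_out = netG(x).cpu().numpy()

session = ort.InferenceSession("sat2map_G.onnx",
                               providers=["CPUExecutionProvider"])
onnx_out = session.run(None, {"input": x.numpy()})[0]

# With model.eval() (or instance norm throughout), the two outputs should agree
# closely; without it, BatchNorm uses batch statistics in PyTorch and they drift apart.
print("max abs diff:", np.abs(torch_out - onnx_out).max())
```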
