Only tensor input is valid when converting ONNX model #1593

Open
BinaryWarlock opened this issue Apr 9, 2024 · 5 comments
Assignees: antimora
Labels: blocked (Should not be tackled right now), onnx

Comments

@BinaryWarlock

Describe the bug

ERROR burn_import::logger: PANIC => panicked at .../burn-import-0.12.1/src/onnx/dim_inference.rs:403:9:
  Only tensor input is valid

The panic occurs in conv2d_update_outputs.

After adding some extra printing, the function appears to receive a Scalar(Float32) input. I don't know where that comes from.

To Reproduce
Using the onnx-clip model, which you can grab at https://lakera-clip.s3.eu-west-1.amazonaws.com/clip_image_model_vitb32.onnx (about 336 MiB)

  1. Fix the input with python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param image_batch_size --dim_value 1 clip_image_model_vitb32.onnx clip_image_model_vitb32_batchsize1.onnx
  2. Try importing with the burn-import build script (see the sketch below)
  3. Receive error
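
For step 2, here is a minimal build-script sketch using burn-import's ModelGen (the file path and output directory are assumptions, not taken from the original report):

  // build.rs -- minimal sketch; adjust the input path and out_dir to your project layout.
  use burn_import::onnx::ModelGen;

  fn main() {
      // Generate Rust source for the model from the fixed-batch-size ONNX file.
      ModelGen::new()
          .input("src/model/clip_image_model_vitb32_batchsize1.onnx")
          .out_dir("model/")
          .run_from_script();
  }

This assumes burn-import is declared as a build dependency in Cargo.toml; with this model, the generation step panics with the error quoted above.
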
@antimora antimora self-assigned this Apr 14, 2024
@antimora antimora added the onnx label Apr 14, 2024
@antimora
Collaborator

I will investigate next week.

@antimora
Collaborator

The immediate issue is related to the Cast op, which will be fixed by PR #1634.

Once we merge the outstanding ONNX-related PRs, I'll check it again.

I also noticed that our Transpose operator is not correctly implemented. Filed a ticket: #1642

@antimora
Collaborator

OK. We made progress, but I found another bug that's blocking.

@antimora antimora added the blocked Should not be tackled right now label Apr 16, 2024
@BinaryWarlock
Author

@antimora Do you happen to know which ops this model is blocked on currently?

@antimora
Collaborator

antimora commented May 1, 2024

> @antimora Do you happen to know which ops this model is blocked on currently?

I fixed a bunch of limitations we had (mostly related to an older OpSet version) and added a few Ops, but we still lack the following.

We are missing these implementations (mentioned in #1714):

  1. ConstantOfShape
  2. Gather

We also need to fix this:

  1. Reshape (needs to accept input for the shape).

The full list of Ops used by the ONNX file:

  1. Add
  2. Cast
  3. Concat
  4. Constant
  5. ConstantOfShape
  6. Conv
  7. Div
  8. Gather
  9. Gemm
  10. MatMul
  11. Mul
  12. Pow
  13. ReduceMean
  14. Reshape
  15. Shape
  16. Sigmoid
  17. Softmax
  18. Split
  19. Sqrt
  20. Sub
  21. Transpose
  22. Unsqueeze

Please note, we might still discover some issues once we implement the missing ops and fix Reshape.
