
Error: Could not infer shapes: missing shape for input attention_mask #200

Open
justyna3773 opened this issue Jan 15, 2024 · 0 comments
Description
Hello,
You mention in the project's README that the tested models include BERT.
I am trying to prepare a BERT model for use with WONNX.
When I try to run

```
nnx prepare -i ./data/models/bert.onnx ./data/models/bert-prepared.onnx --set batch_size=1
```

I get the following error:

```
Setting dimension param 0 (batch_size) to value 1 for /bert/Unsqueeze_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/Unsqueeze_1_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/Cast_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/Sub_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/Mul_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/embeddings/word_embeddings/Gather_output_0
Setting dimension param 0 (batch_size) to value 1 for /bert/embeddings/Add_output_0
Setting dimension param 0 (batch_size) to value 1 for input_ids
Setting dimension param 0 (batch_size) to value 1 for attention_mask
Setting dimension param 0 (batch_size) to value 1 for logits
Error: Could not infer shapes: missing shape for input attention_mask
```

Can you look into this and suggest what I should do differently?

To Reproduce
Steps to reproduce the behavior:

  1. Generate the ONNX model from the Hugging Face BERT checkpoint, following their sample export script (`import onnx` added, since the script uses `onnx.load` at the end):

```python
import onnx
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
dummy_model_input = tokenizer("This is a sample", return_tensors="pt")
torch.onnx.export(
    model,
    tuple(dummy_model_input.values()),
    f="bert.onnx",
    input_names=['input_ids', 'attention_mask'],
    output_names=['logits'],
    dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'},
                  'attention_mask': {0: 'batch_size', 1: 'sequence'},
                  'logits': {0: 'batch_size', 1: 'sequence'}},
    do_constant_folding=True,
    opset_version=11,
)
model = onnx.load('bert.onnx')
```

  2. Run the command:

```
nnx prepare -i ./data/models/bert.onnx ./data/models/bert-prepared.onnx --set batch_size=1
```

Expected behavior
The command should generate the prepared model without errors.

Desktop:

  • OS: Windows 10