
Export to onnx #29

Open
petropetropetro opened this issue Jul 21, 2023 · 3 comments

@petropetropetro

Hi, I'm trying to export your model to ONNX.
Have you tried doing something like this?
Right now I'm stuck on `RuntimeError: Expected a sequence type, but received a non-iterable type in graph output index 0`. Is this because `flow_preds` is a dict, or could it be related to something else?
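For what it's worth, that error often comes up when a model's `forward` returns a dict: the exporter expects a tensor or a sequence of tensors at each graph output. A common workaround (a minimal sketch with a toy model, not code from this repo) is to wrap the model so its output is unpacked into a plain tuple before export:

```python
import torch

class DictModel(torch.nn.Module):
    """Toy stand-in for a model whose forward returns a dict of tensors."""
    def forward(self, x):
        return {"flow_preds": x * 2}

class TupleWrapper(torch.nn.Module):
    """Unpacks the dict output into a plain tuple so the ONNX exporter
    sees an iterable of tensors instead of a dict."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        # Sort keys for a deterministic output order in the exported graph.
        return tuple(out[k] for k in sorted(out))

wrapped = TupleWrapper(DictModel())
y = wrapped(torch.ones(1, 3))  # tuple with one tensor of 2.0s
```

You would then pass `wrapped` (rather than the original model) to `torch.onnx.export`.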

@petropetropetro (Author)

Added more context:

```python
torch.onnx.export(model_without_ddp,           # model being run
                  args=(right, left, 'self_swin2d_cross_swin1d',
                        [2, 8], [-1, 4],
                        [-1, 1], 3,
                        False, 'stereo', None, None,
                        1. / 0.5, 1. / 10, 64,
                        False, False),         # model input (or a tuple for multiple inputs)
                  f="super_resolution.onnx",   # where to save the model (file or file-like object)
                  verbose=True,
                  export_params=True,          # store the trained parameter weights inside the model file
                  opset_version=16,            # the ONNX version to export the model to
                  do_constant_folding=True,    # whether to execute constant folding for optimization
                  input_names=['img0', 'img1', 'attn_type',
                               'attn_splits_list', 'corr_radius_list',
                               'prop_radius_list', 'num_reg_refine',
                               'pred_bidir_flow', 'task', 'intrinsics', 'pose',
                               'min_depth', 'max_depth', 'num_depth_candidates',
                               'depth_from_argmax', 'pred_bidir_depth'],  # the model's input names
                  output_names=['flow_preds'])
```

```
RuntimeError                              Traceback (most recent call last)
Cell In[21], line 1
----> 1 torch.onnx.export(model_without_ddp, # model being run
      2     args=(right, left, 'self_swin2d_cross_swin1d',
      3         [2, 8], [-1, 4],
      4         [-1, 1], 3,
      5         False, 'stereo', None, None,
      6         1. / 0.5, 1. / 10, 64,
      7         False, False), # model input (or a tuple for multiple inputs)
      8     f="super_resolution.onnx", # where to save the model (can be a file or file-like object)
      9     verbose=True,
     10     export_params=True, # store the trained parameter weights inside the model file
     11     opset_version=16, # the ONNX version to export the model to
     12     do_constant_folding=True, # whether to execute constant folding for optimization
     13     input_names = ['img0', 'img1', 'attn_type',
     14         'attn_splits_list', 'corr_radius_list',
     15         'prop_radius_list', 'num_reg_refine',
     16         'pred_bidir_flow' , 'task', 'intrinsics', 'pose',
     17         'min_depth', 'max_depth', 'num_depth_candidates',
     18         'depth_from_argmax', 'pred_bidir_depth'], # the model's input names
     19     output_names = ['flow_preds'])

File c:\ProgramData\Anaconda3\lib\site-packages\torch\onnx\utils.py:506, in export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, custom_opsets, export_modules_as_functions)
    188 @_beartype.beartype
    189 def export(
    190     model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],
   (...)
    206     export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,
    207 ) -> None:
    208     r"""Exports a model into ONNX format.
    209
    210     If model is not a :class:torch.jit.ScriptModule nor a
   (...)
    503     All errors are subclasses of :class:errors.OnnxExporterError.
    504     """
--> 506     _export(
    507         model,
    508         args,
    509         f,
    510         export_params,
    511         verbose,
    512         training,
    513         input_names,
    514         output_names,
    515         operator_export_type=operator_export_type,
    516         opset_version=opset_version,
    517         do_constant_folding=do_constant_folding,
    518         dynamic_axes=dynamic_axes,
    519         keep_initializers_as_inputs=keep_initializers_as_inputs,
    520         custom_opsets=custom_opsets,
    521         export_modules_as_functions=export_modules_as_functions,
    522     )

File c:\ProgramData\Anaconda3\lib\site-packages\torch\onnx\utils.py:1548, in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, opset_version, do_constant_folding, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, onnx_shape_inference, export_modules_as_functions)
   1545     dynamic_axes = {}
   1546 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
-> 1548 graph, params_dict, torch_out = _model_to_graph(
   1549     model,
   1550     args,
   1551     verbose,
   1552     input_names,
   1553     output_names,
   1554     operator_export_type,
   1555     val_do_constant_folding,
   1556     fixed_batch_size=fixed_batch_size,
   1557     training=training,
   1558     dynamic_axes=dynamic_axes,
   1559 )
   1561 # TODO: Don't allocate a in-memory string for the protobuf
   1562 defer_weight_export = (
   1563     export_type is not _exporter_states.ExportTypes.PROTOBUF_FILE
   1564 )

File c:\ProgramData\Anaconda3\lib\site-packages\torch\onnx\utils.py:1160, in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, dynamic_axes)
   1156 # assign_output_shape pass is not compatible with quantized outputs.
   1157 # Quantized outputs are flattened to 3 values in ONNX, while packed as
   1158 # single value in PyTorch.
   1159 if not any(getattr(out, "is_quantized", False) for out in output_tensors):
-> 1160     _C._jit_pass_onnx_assign_output_shape(
   1161         graph,
   1162         output_tensors,
   1163         out_desc,
   1164         GLOBALS.onnx_shape_inference,
   1165         is_script,
   1166         GLOBALS.export_onnx_opset_version,
   1167     )
   1169 _set_input_and_output_names(graph, input_names, output_names)
   1170 params_dict = _get_named_param_dict(graph, params)

RuntimeError: Expected a sequence type, but received a non-iterable type in graph output index 0
```
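One thing worth noting about the call above: `torch.onnx.export` only traces tensor inputs. Strings, lists of ints, `None`, and floats in `args` are baked into the graph as constants, so listing names like `attn_type` or `task` in `input_names` won't make them graph inputs. A sketch (the keyword names below are assumptions about the model's signature, and the toy model just stands in for the real one) of binding those options in a wrapper so the exporter only sees the two image tensors:

```python
import torch

class ToyUniMatch(torch.nn.Module):
    """Toy stand-in for the real model; real keyword names may differ."""
    def forward(self, img0, img1, **opts):
        return img0 + img1

class ExportWrapper(torch.nn.Module):
    """Fixes all non-tensor options at export time so only tensors are traced."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, img0, img1):
        # Values taken from the export call above; keyword names are assumed.
        return self.model(img0, img1,
                          attn_type='self_swin2d_cross_swin1d',
                          attn_splits_list=[2, 8],
                          corr_radius_list=[-1, 4],
                          prop_radius_list=[-1, 1],
                          num_reg_refine=3,
                          task='stereo')

wrapped = ExportWrapper(ToyUniMatch())
img0 = torch.zeros(1, 3, 4, 4)
img1 = torch.ones(1, 3, 4, 4)
out = wrapped(img0, img1)

# Export would then only declare the tensor inputs, e.g.:
# torch.onnx.export(wrapped, (img0, img1), "unimatch.onnx",
#                   input_names=['img0', 'img1'],
#                   output_names=['flow_preds'],
#                   opset_version=16)
```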

@juandavid212

Do you have any updates about this?

@petropetropetro (Author)

@juandavid212 No. I found https://github.com/fateshelled/unimatch_onnx, which was good enough for me as a proof of concept.
