
onnx model conversion #658

Open · Allwinraj opened this issue Mar 8, 2024 · 5 comments

@Allwinraj

How do I convert a trained ResNet model to ONNX?

After training with https://github.com/onnx/models/blob/bec48b6a70e5e9042c0badbaafefe4454e072d08/validated/vision/classification/resnet/train_resnet.ipynb, the models are saved in the params folder (1.0000-imagenet-resnet50_v1-0000.params, 1.0000-imagenet-resnet50_v1-symbol.json).

How can this model format be converted to ONNX? Is any conversion code available?

@chuqingq

@Allwinraj (Author)

@chuqingq

I can convert the ResNet model using the code below.

```python
import numpy as np
import mxnet

# Symbol and params files produced by the training notebook
mx_symbol = 'params/1.0000-imagenet-resnet18_v1-symbol.json'
mx_params = 'params/1.0000-imagenet-resnet18_v1-0001.params'
onnx_file = 'modelresnet.onnx'

# A single NCHW input of ImageNet-sized images
input_shapes = [(1, 3, 224, 224)]
input_types = np.float32

mxnet.onnx.export_model(mx_symbol, mx_params, input_shapes, input_types,
                        onnx_file, dynamic=False)
```

It worked for me.
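
To double-check the exported file, a quick sanity test (a minimal sketch, assuming onnx and onnxruntime are installed):

```python
import numpy as np
import onnx
import onnxruntime as ort

# Validate the exported protobuf
onnx.checker.check_model(onnx.load('modelresnet.onnx'))

# Run one dummy forward pass to confirm the graph executes
sess = ort.InferenceSession('modelresnet.onnx',
                            providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = sess.run(None, {input_name: dummy})[0]
print(out.shape)  # expect (1, 1000) for an ImageNet classifier
```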

@chuqingq

@Allwinraj Thank you!

I used similar code to export a MobileNet model to ONNX, but the exported model does not work for inference.

Code to export the MobileNet model to ONNX:

```python
import mxnet as mx
import numpy as np

model_dir = './mobilenet-v2-model/params/'
# Downloaded input symbol and params files
sym = model_dir + '1.0000-imagenet-mobilenetv2_1.0-symbol.json'
params = model_dir + '1.0000-imagenet-mobilenetv2_1.0-0000.params'

# Standard ImageNet input - 3 channels, 224x224
input_shape = (1, 3, 224, 224)

# Path of the output file
onnx_file = './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'

converted_model_path = mx.onnx.export_model(sym, params, [input_shape],
                                            np.float32, onnx_file)

# Check validity of the converted ONNX protobuf
import onnx
from onnx import checker

model_proto = onnx.load_model(converted_model_path)
checker.check_graph(model_proto.graph)
```

It works.

Code for inference, from the [ONNX model ImageNet inference notebook](https://github.com/onnx/models/blob/main/validated/vision/classification/imagenet_inference.ipynb):

```python
# %% [markdown]
# # Inference Demo for ImageNet Models

# %% [markdown]
# ## Overview
# This notebook can be used for inference on ONNX models trained on **ImageNet** dataset. The demo shows how to use the trained models to do inference in MXNet. Please install the prerequisite packages if not already installed. 
# 
# ## Model Support in This Demo
# 
# * SqueezeNet
# * VGG
# * ResNet
# * MobileNet
# 
# ## Prerequisites
# 
# * Protobuf compiler - `sudo apt-get install protobuf-compiler libprotoc-dev` (required for ONNX. This will work for any linux system. For detailed installation guidelines head over to [ONNX documentation](https://github.com/onnx/onnx#installation))
# * ONNX - `pip install onnx`
# * MXNet - `pip install mxnet-cu90mkl --pre -U` (tested on this version GPU, can use other versions. `--pre` indicates a pre build of MXNet which is required here for ONNX version compatibility. `-U` uninstalls any existing MXNet version allowing for a clean install)
# * numpy - `pip install numpy`
# * matplotlib - `pip install matplotlib`
# 
# In order to do inference with a python script: 
# * Generate the script : In Jupyter Notebook browser, go to File -> Download as -> Python (.py)
# * Run the script: `python imagenet_inference.py`

# %% [markdown]
# ### Import dependencies
# Verify that all dependencies are installed using the cell below. Continue if no errors encountered, warnings can be ignored.

# %%
import mxnet as mx
import matplotlib.pyplot as plt
import numpy as np
from collections import namedtuple
from mxnet.gluon.data.vision import transforms
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
import os

# %% [markdown]
# ### Test Images
# A test image will be downloaded to test out inference. Feel free to provide your own image instead.

# %%
# mx.test_utils.download('https://s3.amazonaws.com/model-server/inputs/kitten.jpg')

# %% [markdown]
# ### Download label file for ImageNet
# Download and load synset.txt file containing class labels for ImageNet

# %%
# mx.test_utils.download('https://s3.amazonaws.com/onnx-model-zoo/synset.txt')
with open('./mobilenet-v2-model/synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

# %% [markdown]
# ### Import ONNX model
# Import a model from ONNX to MXNet symbols and params

# %%
# Enter path to the ONNX model file
model_path= './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'
#model_path= './mobilenet-v2-model/mobilenetv2-12.onnx'
#model_path= './mobilenet-v2-model/mobilenetv2-12-rknn.onnx'
sym, arg_params, aux_params = import_model(model_path)

# %% [markdown]
# ### Read image
# `get_image(path, show=False)` : Read and show the image taking the `path` as input

# %%
Batch = namedtuple('Batch', ['data'])
def get_image(path, show=False):
    img = mx.image.imread(path)
    if img is None:
        return None
    if show:
        plt.imshow(img.asnumpy())
        plt.axis('off')
    return img

# %% [markdown]
# ### Preprocess image
# `preprocess(img)` : Preprocess inference image -> resize to 256x256, take center crop of 224x224, normalize image, add a dimension to batchify the image

# %%
def preprocess(img):   
    transform_fn = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    img = transform_fn(img)
    img = img.expand_dims(axis=0)
    return img

# %% [markdown]
# ### Predict
# `predict(path)` : Takes `path` of the input image and flag to display input image and prints 5 top predictions

# %%
def predict(path):
    img = get_image(path, show=True)
    img = preprocess(img)
    mod.forward(Batch([img]))
    # Take softmax to generate probabilities
    scores = mx.ndarray.softmax(mod.get_outputs()[0]).asnumpy()
    # print the top-5 inferences class
    scores = np.squeeze(scores)
    a = np.argsort(scores)[::-1]
    for i in a[0:5]:
        print('class=%s ; probability=%f' %(labels[i],scores[i]))

# %% [markdown]
# ### Load the network for inference
# Use `mx.mod.Module` to define the network architecture and bind the parameter values using `mod.set_params`. `mod.bind` tells the network the shape of input and labels to expect.

# %%
# Determine and set context
if len(mx.test_utils.list_gpus())==0:
    ctx = mx.cpu()
else:
    ctx = mx.gpu(0)
# Load module
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))], 
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True, allow_extra=True)

# %% [markdown]
# ### Generate predictions
# The top 5 classes (in order) along with the probabilities generated for the image is displayed in the output of the cell below

# %%
# Enter path to the inference image below
img_path = './mobilenet-v2-model/kitten.jpg'
predict(img_path)
```

It does not work. The result is:

```text
$ python imagenet_inference.py
Traceback (most recent call last):
  File "imagenet_inference.py", line 65, in <module>
    sym, arg_params, aux_params = import_model(model_path)
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_model.py", line 60, in import_model
    sym, arg_params, aux_params = graph.from_onnx(model_proto.graph, opset_version=model_opset_version)
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py", line 116, in from_onnx
    inputs = [self._nodes[i] for i in node.input]
  File "/home/chuqq/micromamba/envs/python37/lib/python3.7/site-packages/mxnet/contrib/onnx/onnx2mx/import_onnx.py", line 116, in <listcomp>
    inputs = [self._nodes[i] for i in node.input]
KeyError: 'mobilenetv20_features_relu60_relu6_min'
```

I don't know why.
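
Listing the operators in the exported graph at least shows what the importer has to map (a small sketch using the onnx package, pointed at the exported file above):

```python
import onnx

model = onnx.load('./mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx')
# Print every distinct op type in the graph; the onnx2mx importer
# raises KeyError on any node it has no mapping for.
print(sorted({node.op_type for node in model.graph.node}))
```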

@Allwinraj (Author)

@chuqingq

Did you try

```python
import onnxruntime
sess = onnxruntime.InferenceSession('model.onnx')
```

instead of

```python
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
```

onnxruntime worked for me.
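
The mxnet.contrib.onnx.onnx2mx importer is an older contrib code path, so my guess is it simply has no mapping for the node your traceback names (the relu6/min decomposition the exporter emits); running the exported file with onnxruntime sidesteps that importer entirely. A minimal sketch, reusing the paths and preprocessing from your script:

```python
import numpy as np
import onnxruntime as ort
import mxnet as mx
from mxnet.gluon.data.vision import transforms

model_path = './mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx'
sess = ort.InferenceSession(model_path, providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name

# Same preprocessing as the notebook: resize, center crop, normalize
transform_fn = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = transform_fn(mx.image.imread('./mobilenet-v2-model/kitten.jpg'))
img = img.expand_dims(axis=0).asnumpy()

# Top-5 class indices into synset.txt (argsort order is unchanged by softmax)
scores = sess.run(None, {input_name: img})[0].squeeze()
for i in np.argsort(scores)[::-1][:5]:
    print(i, scores[i])
```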

@chuqingq

> @chuqingq
>
> Did you try
>
> ```python
> import onnxruntime
> sess = onnxruntime.InferenceSession('model.onnx')
> ```
>
> instead of
>
> ```python
> from mxnet.contrib.onnx.onnx2mx.import_model import import_model
> ```
>
> onnxruntime worked for me.

@Allwinraj

Sorry, I didn't use onnxruntime.InferenceSession to load the model.

I use rknn_toolkit2 to convert the ONNX model to an RKNN model and run inference with that.
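
For reference, my conversion flow looks roughly like this (a sketch from memory; the mean/std values and target_platform are placeholders that depend on the board and on the model's preprocessing):

```python
from rknn.api import RKNN

rknn = RKNN()
# Normalization is folded into the RKNN model; these are the ImageNet
# mean/std scaled to 0-255 inputs (an assumption - match your preprocessing)
rknn.config(mean_values=[[123.675, 116.28, 103.53]],
            std_values=[[58.395, 57.12, 57.375]],
            target_platform='rk3588')  # hypothetical target board
rknn.load_onnx(model='./mobilenet-v2-model/mxnet_exported_imagenet-mobilenetv2_1.0.onnx')
rknn.build(do_quantization=False)
rknn.export_rknn('./mobilenet-v2-model/mobilenetv2.rknn')
```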
