ZFNet-512

|Model|Download|Download (with sample test data)|ONNX version|Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
|:-|:-|:-|:-|:-|:-|:-|
|ZFNet-512|341 MB|320 MB|1.1|3| | |
|ZFNet-512|341 MB|320 MB|1.1.2|6| | |
|ZFNet-512|341 MB|320 MB|1.2|7| | |
|ZFNet-512|341 MB|318 MB|1.3|8| | |
|ZFNet-512|341 MB|318 MB|1.4|9| | |
|ZFNet-512|333 MB|309 MB|1.9|12|55.97|79.41|
|ZFNet-512-int8|83 MB|48 MB|1.9|12|55.84|79.33|
|ZFNet-512-qdq|84 MB|56 MB|1.9|12|55.83|79.42|

Compared with the fp32 ZFNet-512, the int8 ZFNet-512's relative Top-1 accuracy drop is 0.23%, its relative Top-5 accuracy drop is 0.10%, and its performance improvement is 1.78x.

Note

Different preprocessing methods lead to different accuracies; the accuracy in the table depends on the specific preprocessing method used.

Performance depends on the test hardware. The performance data here was collected on an Intel® Xeon® Platinum 8280 processor (1 socket, 4 cores per instance) running CentOS Linux 8.3, with a data batch size of 1.

Description

ZFNet-512 is a deep convolutional network for image classification. This model's 4th layer has 512 feature maps instead of the 1024 maps described in the paper.

Dataset

ILSVRC2013

Source

Caffe2 ZFNet-512 ==> ONNX ZFNet-512

Model input and output

Input

```
gpu_0/data_0: float[1, 3, 224, 224]
```

Output

```
gpu_0/softmax_1: float[1, 1000]
```
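The input/output signature above can be exercised with onnxruntime. A minimal sketch, assuming the opset-12 model file `zfnet512-12.onnx` has been downloaded locally (the session calls are standard onnxruntime API; the model path is an assumption):

```python
import numpy as np

def make_input(batch=1):
    """Build a dummy tensor matching gpu_0/data_0: float[1, 3, 224, 224]."""
    return np.random.rand(batch, 3, 224, 224).astype(np.float32)

# Hypothetical inference call (requires onnxruntime and the model file):
# import onnxruntime as ort
# sess = ort.InferenceSession("zfnet512-12.onnx")
# (probs,) = sess.run(["gpu_0/softmax_1"], {"gpu_0/data_0": make_input()})
# probs has shape (1, 1000), one softmax score per ImageNet class.
```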

Pre-processing steps
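This section is empty in the source; as a hedged illustration, a common ImageNet-style preparation (center crop to 224x224, scale to [0,1], per-channel normalization, HWC to NCHW) can be sketched as below. The mean/std constants are the widely used ImageNet values, an assumption rather than values taken from this repository:

```python
import numpy as np

def preprocess(img):
    """Convert an HWC uint8 image (at least 224x224) into a
    [1, 3, 224, 224] float32 tensor: center crop, scale to [0,1],
    normalize per channel, transpose to NCHW.
    NOTE: mean/std are the common ImageNet values -- an assumption."""
    h, w, _ = img.shape
    top, left = (h - 224) // 2, (w - 224) // 2
    img = img[top:top + 224, left:left + 224, :].astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std
    return np.transpose(img, (2, 0, 1))[np.newaxis, ...]
```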

Post-processing steps
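This section is also empty in the source. Since the model's output is already a softmax over 1000 classes, a typical post-processing step is simply selecting the top-k predictions; a minimal sketch:

```python
import numpy as np

def top_k(probs, k=5):
    """Return (indices, scores) of the k highest-probability classes
    from a [1, 1000] softmax output."""
    flat = probs.ravel()
    idx = np.argsort(flat)[::-1][:k]
    return idx, flat[idx]
```

The indices can then be mapped to human-readable labels with an ImageNet class-index file.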

Sample test data

Randomly generated sample test data:

- test_data_set_0
- test_data_set_1
- test_data_set_2
- test_data_set_3
- test_data_set_4
- test_data_set_5

Results/accuracy on test set

Quantization

ZFNet-512-int8 and ZFNet-512-qdq are obtained by quantizing the fp32 ZFNet-512 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.

Environment

- onnx: 1.9.0
- onnxruntime: 1.8.0

Prepare model

```shell
wget https://github.com/onnx/models/raw/main/vision/classification/zfnet-512/model/zfnet512-12.onnx
```

Model quantize

Make sure to specify the appropriate dataset path in the configuration file.

```bash
bash run_tuning.sh --input_model=path/to/model \  # model path as *.onnx
                   --config=zfnet512.yaml \
                   --data_path=/path/to/imagenet \
                   --label_path=/path/to/imagenet/label \
                   --output_model=path/to/save
```

References

Contributors

License

MIT