The PP-OCRv4 detection model degrades severely after conversion to TensorRT. Could you suggest a solution? Thanks!
Could you describe the problem in more detail?
The PaddleOCRv4 server-side text detection model shows no accuracy drop when converted to ONNX, but after converting from ONNX to TRT the accuracy drops by about 70%. Several other developers have reported the same issue: #10917 #11419. Could you help look into it? Thank you very much. The conversion command:
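To pin down where the ONNX-to-TRT conversion diverges, it helps to feed the same input through both backends and compare the raw detection probability maps numerically before any post-processing. A minimal sketch of such a comparison helper (numpy only; the function name is hypothetical, and the arrays below are synthetic stand-ins for real backend outputs):

```python
import numpy as np

def compare_outputs(ref, test, atol=1e-3):
    """Report how far one backend's output drifts from a reference.

    `ref` and `test` are arrays of identical shape, e.g. the detection
    probability maps produced by ONNX Runtime and TensorRT on the same
    image.
    """
    ref = ref.astype(np.float64).ravel()
    test = test.astype(np.float64).ravel()
    max_abs = float(np.max(np.abs(ref - test)))
    # Cosine similarity near 1.0 means the maps still agree in shape,
    # so any accuracy drop likely comes from thresholding/post-processing.
    denom = np.linalg.norm(ref) * np.linalg.norm(test) + 1e-12
    cos = float(np.dot(ref, test) / denom)
    return {"max_abs_diff": max_abs, "cosine_sim": cos,
            "within_atol": max_abs <= atol}

# Synthetic demonstration (stand-ins for real backend outputs):
ref = np.random.RandomState(0).rand(1, 1, 64, 64).astype(np.float32)
test = ref + 1e-4  # small, FP16-like perturbation
print(compare_outputs(ref, test))
```

If `max_abs_diff` is already large on the raw maps, the problem is in the engine build (layer precision, unsupported ops falling back incorrectly); if the maps match but final boxes differ, look at the post-processing instead.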
Could you help answer this? Thanks!
You can try running inference with FastDeploy and switching the backend to trt:
python infer.py --det_model ch_PP-OCRv4_det_infer --cls_model ch_ppocr_mobile_v2.0_cls_infer --rec_model ch_PP-OCRv4_rec_infer --rec_label_file ppocr_keys_v1.txt --image 12.jpg --device gpu --backend trt
Reference: https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/ocr/PP-OCR/cpu-gpu/python
Note, however, that accuracy has not been verified on FP16 yet, so precision loss there is indeed possible; we recommend using FP32 first.
Thank you for your reply. The detection model converted to TensorRT also appears to have problems at FP32. Were you able to verify on your side that FP32 is accurate?