TensorRT is slower than PyTorch #12994

Closed
namogg opened this issue May 9, 2024 · 7 comments
Labels
question Further information is requested

Comments

namogg commented May 9, 2024

Question

I am running inference with the YOLOv8n pose estimation model. With batch size 4, the PyTorch model takes around 20 ms per batch, while the TensorRT engine takes 25 ms.
This is my export setting:

model.export(format='engine', dynamic=False, batch=4, half=True, imgsz=640)

TensorRT 8.6
PyTorch 2.2.1
CUDA 11.8
Ultralytics 8.1.47

Conversion log:

WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics YOLOv8.1.47 🚀 Python-3.11.8 torch-2.2.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
YOLOv8n-pose summary (fused): 187 layers, 3289964 parameters, 0 gradients, 9.2 GFLOPs

PyTorch: starting from 'detection/model/yolov8n-pose.pt' with input shape (4, 3, 640, 640) BCHW and output shape(s) (4, 56, 8400) (6.5 MB)

ONNX: starting export with onnx 1.15.0 opset 17...
ONNX: simplifying with onnxsim 0.4.36...
WARNING: failed to run "Add" op (name is "/model.22/Add"), skip...
WARNING: failed to run "Div" op (name is "/model.22/Div"), skip...
WARNING: failed to run "Mul" op (name is "/model.22/Mul_1"), skip...
WARNING: failed to run "Add" op (name is "/model.22/Add"), skip...
WARNING: failed to run "Div" op (name is "/model.22/Div"), skip...
WARNING: failed to run "Mul" op (name is "/model.22/Mul_1"), skip...
ONNX: simplifier failure: Nodes in a graph must be topologically sorted, however input '/model.22/Div_output_0' of node: 
name: /model.22/Slice OpType: Slice
 is not output of any previous nodes.
ONNX: export success ✅ 1.6s, saved as 'detection/model/yolov8n-pose.onnx' (12.9 MB)

TensorRT: starting export with TensorRT 8.6.1...
[05/09/2024-23:14:32] [TRT] [I] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 561, GPU 1463 (MiB)
[05/09/2024-23:14:39] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +1461, GPU +264, now: CPU 2098, GPU 1727 (MiB)
[05/09/2024-23:14:39] [TRT] [I] ----------------------------------------------------------------
[05/09/2024-23:14:39] [TRT] [I] Input filename:   detection/model/yolov8n-pose.onnx
[05/09/2024-23:14:39] [TRT] [I] ONNX IR version:  0.0.8
[05/09/2024-23:14:39] [TRT] [I] Opset version:    17
[05/09/2024-23:14:39] [TRT] [I] Producer name:    pytorch
[05/09/2024-23:14:39] [TRT] [I] Producer version: 2.2.1
[05/09/2024-23:14:39] [TRT] [I] Domain:           
[05/09/2024-23:14:39] [TRT] [I] Model version:    0
[05/09/2024-23:14:39] [TRT] [I] Doc string:       
[05/09/2024-23:14:39] [TRT] [I] ----------------------------------------------------------------
[05/09/2024-23:14:39] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT: input "images" with shape(4, 3, 640, 640) DataType.FLOAT
TensorRT: output "output0" with shape(4, 56, 8400) DataType.FLOAT
TensorRT: building FP16 engine as detection/model/yolov8n-pose.engine
[05/09/2024-23:14:39] [TRT] [I] Graph optimization time: 0.0326495 seconds.
[05/09/2024-23:14:39] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.


namogg added the question label May 9, 2024

glenn-jocher (Member) commented

Hey there! Thanks for sharing the details of your YOLOv8n pose estimation model deployment.

Based on your log, there are several warnings during the ONNX conversion that might impact performance when using TensorRT. These warnings indicate operations the simplifier failed to run, which could leave inefficiencies in the graph that TensorRT executes. Also, consider that TensorRT and PyTorch may handle certain operations or optimizations differently.

Here are a couple of suggestions:

  1. Look into the warnings thrown during the ONNX export. Solving these might improve the TensorRT performance.
  2. Experiment with different settings during the export process (like changing dynamic settings or tensor precision) — see the sketch just below.
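To make suggestion 2 concrete, here is a minimal sketch of a few export variants to benchmark against each other. The argument names follow the standard Ultralytics export API, but which combination wins is hardware-dependent, and note that each export overwrites the previous .engine file, so rename between runs:

from ultralytics import YOLO

model = YOLO('detection/model/yolov8n-pose.pt')

# A few export variants to compare; the fastest depends on the GPU.
for kwargs in (
    dict(half=True),                  # FP16, static shapes (the original setting)
    dict(half=False),                 # FP32 baseline for comparison
    dict(half=True, dynamic=True),    # dynamic shapes (may cost some speed)
    dict(half=True, simplify=False),  # skip onnxsim if it produces warnings
):
    model.export(format='engine', batch=4, imgsz=640, **kwargs)  # overwrites the .engine each time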

Since model optimization can be quite specific to the operations used and hardware architecture, sometimes it may require a bit of fine-tuning to get the best performance out of TensorRT.

I hope this helps! If you need more detailed guidance, feel free to ask! 🚀

namogg (Author) commented May 10, 2024

Thanks for your response. The warnings don't happen when I export to ONNX separately and use trtexec to convert to TensorRT, but then I can't run inference. Is there any solution to this?

Loading detection/model/yolo.engine for TensorRT inference...
Traceback (most recent call last):
  File "/home/namogg/Grab And Go/main.py", line 14, in <module>
    main()
  File "/home/namogg/Grab And Go/main.py", line 11, in main
    engine.run()
  File "/home/namogg/Grab And Go/engine.py", line 45, in run
    self.run_predict()
  File "/home/namogg/Grab And Go/engine.py", line 86, in run_predict
    pose_list = self.pose_estimation_model.extract_keypoints(combined_frames,sources = camera_ids)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/Grab And Go/detection/pose.py", line 45, in extract_keypoints
    results = self.model.predict(frames,show = False, save = False,verbose = False)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/engine/model.py", line 445, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 297, in setup_model
    self.model = AutoBackend(
                 ^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/namogg/anaconda3/envs/layout/lib/python3.11/site-packages/ultralytics/nn/autobackend.py", line 235, in __init__
    metadata = json.loads(f.read(meta_len).decode("utf-8"))  # read metadata
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8c in position 12: invalid start byte
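For context on that UnicodeDecodeError: an Ultralytics-exported .engine file prepends a small metadata header (a 4-byte little-endian length followed by JSON) that AutoBackend reads back before deserializing the engine, which is what the f.read(meta_len) line in the traceback is doing. An engine built directly with trtexec has no such header, so the loader tries to decode raw engine bytes as JSON and fails. Below is a minimal sketch of prepending a compatible header to a trtexec-built engine; the paths and metadata values shown are assumptions for illustration, not values from this thread:

import json

# Hypothetical paths: the trtexec-built engine and the patched copy.
src = 'detection/model/yolo.engine'
dst = 'detection/model/yolo_patched.engine'

# Assumed metadata for a YOLOv8n-pose model; adjust stride, batch,
# imgsz, kpt_shape and names to match your actual export.
meta = json.dumps({'stride': 32, 'batch': 4, 'imgsz': [640, 640],
                   'task': 'pose', 'kpt_shape': [17, 3], 'names': {0: 'person'}})

with open(src, 'rb') as f:
    engine_bytes = f.read()

with open(dst, 'wb') as f:
    f.write(len(meta).to_bytes(4, byteorder='little', signed=True))  # length prefix
    f.write(meta.encode())   # JSON metadata that AutoBackend will json.loads()
    f.write(engine_bytes)    # raw serialized engine follows the header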

namogg (Author) commented May 10, 2024

I have set simplify=False and the conversion succeeds without any warnings, but the result is still slower than PyTorch. Can you suggest any solution?
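For reference, a minimal sketch of that export with the simplifier disabled, reusing the settings from the original export call (simplify is the standard Ultralytics export argument):

from ultralytics import YOLO

# Export to TensorRT without running onnxsim on the intermediate ONNX graph.
model = YOLO('detection/model/yolov8n-pose.pt')
model.export(format='engine', dynamic=False, batch=4, half=True, imgsz=640, simplify=False)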

glenn-jocher (Member) commented

@namogg hello! If converting without warnings still results in slower TensorRT performance compared to PyTorch, you might want to consider the following adjustments:

  1. Verify that TensorRT is utilizing all available optimizations, such as layer fusion, precision calibration (using FP16 or INT8 where possible), and optimal kernel selection for your specific GPU.

  2. Ensure your GPU driver and TensorRT are updated to their latest versions, as improvements in newer versions might enhance performance.

  3. Experiment with different batch sizes to determine the optimal throughput for TensorRT on your hardware setup — see the benchmarking sketch after this list.
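A common pitfall when comparing backends is timing without warmup iterations, so engine deserialization and CUDA context setup get counted against whichever framework runs first. A minimal benchmarking sketch, assuming the standard Ultralytics Python API; the paths and iteration counts are illustrative:

import time
import numpy as np
from ultralytics import YOLO

def benchmark(weights, n_warmup=10, n_iters=50):
    model = YOLO(weights, task='pose')
    batch = [np.zeros((640, 640, 3), dtype=np.uint8)] * 4  # dummy batch of 4 frames
    for _ in range(n_warmup):                # warmup: context init, kernel selection
        model.predict(batch, verbose=False)
    t0 = time.perf_counter()
    for _ in range(n_iters):
        model.predict(batch, verbose=False)
    return (time.perf_counter() - t0) / n_iters * 1000   # ms per batch

print(f"PyTorch : {benchmark('detection/model/yolov8n-pose.pt'):.1f} ms/batch")
print(f"TensorRT: {benchmark('detection/model/yolov8n-pose.engine'):.1f} ms/batch")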

Each model and hardware combination might require unique tweaks to fully optimize, so these steps could help pinpoint more effective configurations. Keep experimenting! 🚀

namogg (Author) commented May 13, 2024

I haven't solved the problem yet, but thanks for your support.

namogg closed this as completed May 13, 2024
glenn-jocher (Member) commented

You're welcome! Keep experimenting with the settings, and if there's anything more we can help with, don't hesitate to reach out. Best of luck with your project! 😊
