Convert popular deep learning models to TensorRT using the C++ API (preferably). (Updated Mar 31, 2021 - C++)
Using the TensorRT integration in TensorFlow to perform CNN inference.
This repository provides notebooks on TensorFlow model optimization using TensorRT (TF-TRT) and TF-Lite. You will be able to: understand the fundamentals of optimization with TF-TRT and TF-Lite, deploy deep learning models at FP32, FP16, and INT8 precision for the inference stage, and calibrate the weights.
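The INT8 deployment mentioned above hinges on calibration: picking a scale that maps observed float activations onto the 8-bit integer range. The following is a minimal pure-Python sketch of that principle using max-abs calibration; the function names are illustrative and are not TF-TRT or TensorRT API calls.

```python
# Sketch of post-training INT8 quantization with max-abs calibration,
# the idea behind reduced-precision inference. Names are illustrative,
# not TensorRT APIs.

def calibrate_scale(samples):
    """Pick a scale so the largest observed magnitude maps to 127."""
    max_abs = max(abs(x) for x in samples)
    return max_abs / 127.0 if max_abs else 1.0

def quantize(x, scale):
    """Map a float to a signed 8-bit value, clamped to [-127, 127]."""
    q = round(x / scale)
    return max(-127, min(127, q))

def dequantize(q, scale):
    """Recover an approximate float from the quantized value."""
    return q * scale

# Calibration data stands in for activations observed during inference.
activations = [-2.0, -0.5, 0.1, 0.9, 1.7]
scale = calibrate_scale(activations)
roundtrip = [dequantize(quantize(x, scale), scale) for x in activations]
# Round-trip error is bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-9
           for a, b in zip(activations, roundtrip))
```

Real calibrators (such as TensorRT's entropy calibrator) choose the scale more carefully than a plain max, but the quantize/dequantize round trip is the same.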
Face mask detector for COVID-19 monitoring implemented on a Jetson Nano. The entire framework runs in real time at 7 Hz.
Test of TensorRT on a custom network using Jetson devices.
This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also includes hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers get started with the NeMo LLM Service and Guardrails.
Machine learning API built using FastAPI.
Hardware-accelerated OpenCV, Torch, and TensorRT Docker images for the Jetson Nano on Ubuntu 20.04, with any Python version you need up to the latest, 3.12.
Research experiment archive for post-training quantization with TensorRT. Accepted to IEEE EDGE 2024.
TensorRT implementation of YOLO.
Image classification with NVIDIA TensorRT from TensorFlow models.
This repo covers model optimization using the TensorRT package.
Implementation of popular deep learning networks with TensorRT network definition APIs
YOLOv4 object detection model.