Open deep learning compiler stack for CPUs, GPUs and specialized accelerators (Python, updated May 26, 2024)
High-performance In-browser LLM Inference Engine
Enable everyone to develop, optimize, and deploy AI models natively on their own devices.
TON Foundation invites talent to imagine and realize projects that have the potential to integrate with the daily lives of users.
FlashInfer: Kernel Library for LLM Serving
Solidity compiler for TVM
Client Libraries in 13 languages for TON, GOSH, Venom, Everscale and other TVM blockchains
TVM Documentation in Chinese Simplified / TVM 中文文档
TVM is a deep learning compiler stack for CPUs, GPUs, and accelerators. This repository presents some tips for setting up TVM and deploying neural network models.
yolort is a runtime stack for YOLOv5 on specialized accelerators such as TensorRT, LibTorch, ONNX Runtime, TVM and NCNN.
A curated list of awesome inference deployment framework of artificial intelligence (AI) models. OpenVINO, TensorRT, MediaPipe, TensorFlow Lite, TensorFlow Serving, ONNX Runtime, LibTorch, NCNN, TNN, MNN, TVM, MACE, Paddle Lite, MegEngine Lite, OpenPPL, Bolt, ExecuTorch.
Streamline Ethereum, Solana and Tron operations. Effortlessly create transactions, interact with smart contracts, sign, and send transactions for a seamless blockchain experience.