ncnn is a high-performance neural network inference framework optimized for mobile platforms.
Updated May 27, 2024 - C++
A retargetable MLIR-based machine learning compiler and runtime toolkit.
SHARK - High Performance Machine Learning Distribution
BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.
Concrete: a TFHE compiler that converts Python programs into their FHE equivalents.
MegCC is a deep learning model compiler with an extremely lightweight runtime, high efficiency, and easy portability.
C++ compiler for heterogeneous quantum-classical computing built on Clang and XACC
Better at engineering deployment than algorithm specialists, and better at algorithms and models than engineering specialists.
Highly optimized inference engine for Binarized Neural Networks