Current support: https://github.com/llvm/torch-mlir/blob/main/lib/Conversion/TorchOnnxToTorch/Utils.cpp#L64
Comparing the ONNX dtype enum against the torch dtypes, the following are missing support:
torch dtypes: https://github.com/pytorch/pytorch/blob/main/c10/core/ScalarType.h#L55
ONNX dtype enum (`TensorProto.DataType`): https://github.com/shouxieai/tensorRT_Pro/blob/main/onnx/onnx-ml.proto#L487
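For context, the two enums linked above are the ones being aligned. Below is a minimal sketch of the float8 corner of that mapping; the enum values are copied from `onnx-ml.proto`, but the helper name and the mapping shape are illustrative assumptions, not the actual `Utils.cpp` code:

```python
# ONNX TensorProto.DataType enum values (from onnx-ml.proto) for the float8
# types, mapped to the MLIR float8 type names that appear in the generated IR.
# The table and helper are a sketch, not the real torch-mlir implementation.
ONNX_FLOAT8_TO_MLIR = {
    17: "f8E4M3FN",    # TensorProto.FLOAT8E4M3FN
    18: "f8E4M3FNUZ",  # TensorProto.FLOAT8E4M3FNUZ
    19: "f8E5M2",      # TensorProto.FLOAT8E5M2
    20: "f8E5M2FNUZ",  # TensorProto.FLOAT8E5M2FNUZ
}

def mlir_float8_name(onnx_dtype: int) -> str:
    """Return the MLIR type string for an ONNX float8 dtype, or raise."""
    try:
        return ONNX_FLOAT8_TO_MLIR[onnx_dtype]
    except KeyError:
        raise ValueError(f"unsupported ONNX dtype: {onnx_dtype}")
```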
Checkpoint branches: https://github.com/jinchen62/torch-mlir/tree/dtype_support and https://github.com/jinchen62/llvm-project/tree/dtype_support
Added the float8 types to the op def. It can now generate

```mlir
%78 = "llvm.fptrunc"(%76) : (vector<4xf32>) -> vector<4xf8E5M2>
```

but this fails in https://github.com/llvm/llvm-project/blob/main/mlir/lib/Target/LLVMIR/TypeToLLVM.cpp#L79, and there appears to be no float8 support in https://github.com/llvm/llvm-project/blob/main/llvm/lib/IR/Type.cpp#L234.
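For reference, the `llvm.fptrunc` above is a round-to-nearest down-conversion from f32 to the E5M2 format (1 sign, 5 exponent, 2 mantissa bits). A pure-Python sketch of that rounding, simplified to normal values only (no NaN/inf/subnormal/saturation handling; the function name is illustrative, not a torch-mlir API):

```python
import math

def to_e5m2(x: float) -> float:
    """Round x to the nearest float8 E5M2 value.

    Simplified sketch: handles normal finite values only. E5M2 keeps
    3 significand bits total (implicit leading 1 plus 2 stored bits).
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))   # abs(x) == m * 2**e, with m in [0.5, 1)
    scaled = round(m * 8)       # m * 8 is in [4, 8); round significand to 3 bits
    return sign * math.ldexp(scaled / 8, e)
```

For example, 1.3 lands on 1.25 and 3.6 on 3.5, since the representable steps are 0.25 in [1, 2) and 0.5 in [2, 4).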