Inconsistent result with --sparsification-and-bufferization and tensor.empty #92069
@llvm/issue-subscribers-mlir Author: anonymous (Anonymous15592)
Consider the following MLIR program:
a.mlir:
```
module {
  func.func @tensor_i32(%arg0: tensor<1xi32>) -> i32 {
    %idx0 = index.constant 0
    %0 = tensor.extract %arg0[%idx0] : tensor<1xi32>
    return %0 : i32
  }
  func.func @func1() {
    %c1_i32 = arith.constant 1 : i32
    %c0_i32 = arith.constant 0 : i32
    %c0 = arith.constant 0 : index
    %5 = tensor.empty() : tensor<1xi32> // using empty
    // %5 = tensor.from_elements %c0_i32 : tensor<1xi32>
  }
}
```
It outputs two different results when two different optimization pass sequences are applied:

pass sequence 1:
`--sparsification-and-bufferization --tensor-bufferize --func-bufferize --convert-func-to-llvm --convert-index-to-llvm --convert-vector-to-llvm --finalize-memref-to-llvm --convert-arith-to-llvm --reconcile-unrealized-casts`

pass sequence 2:
`--tensor-bufferize --func-bufferize --convert-func-to-llvm --convert-index-to-llvm --convert-vector-to-llvm --finalize-memref-to-llvm --convert-arith-to-llvm --reconcile-unrealized-casts`
Pass sequence 1 produces an executable that outputs 1, while pass sequence 2 outputs 0. The only difference between pass sequence 1 and pass sequence 2 is the additional `--sparsification-and-bufferization` at the beginning of pass sequence 1. I further analyzed the output of these two shorter pipelines:

pass 1: `--sparsification-and-bufferization --tensor-bufferize`

pass 2: `--tensor-bufferize`
The result of pass 1 is:

The result of pass 2 is:

It seems that `--sparsification-and-bufferization --tensor-bufferize` treats the operand and the result of `tensor.insert` as the same tensor (memref) when the operand of `tensor.insert` is created by `tensor.empty`. If I replace the `tensor.empty` with `tensor.from_elements`, or just wrap the `tensor.empty` in a function, the modified MLIR program outputs the same result under both pipelines. The modified program:
I wonder if there is something wrong with `--sparsification-and-bufferization` and `tensor.empty`. This result inconsistency may not be a real problem, because `tensor.empty` should only carry shape information.

git version: 2163ae7