Inductor error: AttributeError: 'View' object has no attribute 'freeze_layout' #126029

Open
ezyang opened this issue May 12, 2024 · 1 comment
Labels
module: inductor, oncall: pt2, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@ezyang
Contributor

ezyang commented May 12, 2024

🐛 Describe the bug

There is a relatively simple internal repro, but it cannot be shared here.

Internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1427322221239318/

The backtrace is:

File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/graph.py:730, in GraphLowering.run(self, *args)
    728 @dynamo_timed
    729 def run(self, *args):
--> 730     return super().run(*args)
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/fx/interpreter.py:145, in Interpreter.run(self, initial_env, enable_io_processing, *args)
    142     continue
    144 try:
--> 145     self.env[node] = self.run_node(node)
    146 except Exception as e:
    147     if self.extra_traceback:
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/graph.py:1188, in GraphLowering.run_node(self, n)
   1186     debug("layout_constraints")
   1187     args, kwargs = layout_constraints[n.target](n, *args, **kwargs)  # type: ignore[index]
-> 1188     result = self.call_function(n.target, args, kwargs)
   1189 elif is_magic_method(n.target):
   1190     # TODO: this is sus, it probably should be handled in the
   1191     # lowerings themselves similarly to sym_size/sym-stride
   1192     debug("is_magic_method")
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/graph.py:977, in GraphLowering.call_function(self, target, args, kwargs)
    975     return out
    976 except Exception as e:
--> 977     raise LoweringException(e, target, args, kwargs).with_traceback(
    978         e.__traceback__
    979     ) from None
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/graph.py:974, in GraphLowering.call_function(self, target, args, kwargs)
    972 try:
    973     log.debug("  via %s", lowerings[target])
--> 974     out = lowerings[target](*args, **kwargs)
    975     return out
    976 except Exception as e:
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/lowering.py:304, in _register_lowering.<locals>.wrapped(*args, **kwargs)
    301 if unpacked:
    302     args = [args]
--> 304 out = decomp_fn(*args, **kwargs)
    305 validate_ir(out)
    307 return out
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/kernel/conv.py:367, in convolution(x, weight, bias, stride, padding, dilation, transposed, output_padding, groups)
    354 autotuning_gemm = config.max_autotune or config.max_autotune_gemm
    356 if (
    357     (config.conv_1x1_as_mm or (autotuning_gemm and channels_last_conv()))
    358     and is_ones(kernel_shape)
   (...)
    365     and sympy_product(x.get_size()) > 0
    366 ):
--> 367     return convert_1x1_conv_to_mm(x, weight, bias)
    369 if bias is not None and ir.get_device_type(x) != "cpu":
    370     # peel off the bias, cudnn is slower with it
    371     result = convolution(x, weight, None, **kwargs)
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/kernel/conv.py:283, in convert_1x1_conv_to_mm(x, weight, bias)
    281 else:
    282     x.realize()
--> 283     x.freeze_layout()
    285 x_permute = list(range(rank))
    286 x_permute.append(x_permute.pop(1))
File /mnt/xarfuse/uid-27352/ecbef30d-seed-nspid4026534542_cgpid211001756-ns-4026534531/torch/_inductor/ir.py:7185, in MutableBox.__getattr__(self, name)
   7184 def __getattr__(self, name):
-> 7185     fn = getattr(self.data, name)
   7186     if callable(fn):
   7187         return fn
BackendCompilerFailed: backend='inductor' raised:
LoweringException: AttributeError: 'View' object has no attribute 'freeze_layout'
  target: aten.convolution.default
  args[0]: TensorBox(
    View(
      StorageBox(
        ComputedBuffer(name='buf18', layout=FlexibleLayout('cuda', torch.float32, size=[1, 256, 47, 2, 36, 2], stride=[1732608, 6768, 144, 72, 2, 1]), data=Pointwise(
          'cuda',
          torch.float32,
          def inner_fn(index):
              _, i1, i2, i3, i4, i5 = index
              tmp0 = ops.load(buf17, i5 + 2 * i3 + 4 * i1 + 1024 * i4 + 36864 * i2)
              return tmp0
          ,
          ranges=[1, 256, 47, 2, 36, 2],
          origin_node=clone,
          origins={clone}
        ))
      ),
      size=[1, 256, 94, 72],
      reindex=lambda i0, i1, i2, i3: [0, i1, ModularIndexing(i2, 2, 47), ModularIndexing(i2, 1, 2), ModularIndexing(i3, 2, 36), ModularIndexing(i3, 1, 2)],
      origins={clone, view_9}
    )
  )
  args[1]: TensorBox(StorageBox(
    InputBuffer(name='arg10_1', layout=FixedLayout('cuda', torch.float32, size=[1024, 256, 1, 1], stride=[256, 1, 1, 1]))
  ))
  args[2]: None
  args[3]: [1, 1]
  args[4]: [0, 0]
  args[5]: [1, 1]
  args[6]: False
  args[7]: [0, 0]
  args[8]: 1

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True

Maybe you can figure it out solely from the backtrace.
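For what it's worth, the failure mode visible in the trace reduces to the following standalone sketch. These are simplified, hypothetical classes mirroring the shape of `MutableBox.__getattr__` at torch/_inductor/ir.py:7185 in the trace, not the real Inductor IR nodes: `convert_1x1_conv_to_mm` calls `x.freeze_layout()` on the box, the box forwards the lookup to its wrapped `.data`, and when that data is a `View` rather than a realized buffer the attribute is missing.

```python
# Minimal standalone sketch (hypothetical, simplified classes; not the real
# Inductor IR) of the delegation pattern at torch/_inductor/ir.py:7185:
# MutableBox.__getattr__ forwards attribute lookups to the wrapped .data node.

class Buffer:
    def freeze_layout(self):
        # Realized buffers carry a layout that can be frozen in place.
        print("layout frozen")

class View:
    # A lazy reindexing wrapper; it defines no freeze_layout of its own.
    def __init__(self, inner):
        self.inner = inner

class MutableBox:
    def __init__(self, data):
        self.data = data

    def __getattr__(self, name):
        # Only invoked when normal lookup fails; forward to the inner node,
        # as the real ir.py does in the trace above.
        return getattr(self.data, name)

MutableBox(Buffer()).freeze_layout()        # ok: resolved on Buffer
MutableBox(View(Buffer())).freeze_layout()  # AttributeError: 'View' object
                                            # has no attribute 'freeze_layout'
```

Reading the trace, `x.realize()` apparently did not collapse the `View`, so `freeze_layout` never reaches the underlying `ComputedBuffer`; that is just an interpretation of the backtrace, not a confirmed diagnosis.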

cc @gchanan @zou3519 @kadeng @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @jansel, also @XiaobingSuper @jiayisunx (others who have recently interacted with freeze_layout)

Versions

main

@eellison
Contributor

This didn't repro for me.

@bdhirsh added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and removed the triage review label on May 14, 2024