ValueError: Multiple tensors as index not yet supported #56

Open
zrx0311 opened this issue Dec 4, 2023 · 3 comments

zrx0311 commented Dec 4, 2023

I installed web-stable-diffusion according to the official documentation, but it fails to run. The failure seems to be caused by recent changes in Relax.

/web-stable-diffusion# python3 build.py --target cuda
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 17.38it/s]
/root/anaconda3/envs/mlc/lib/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:181: FutureWarning: The configuration file of this scheduler: LCMScheduler {
"_class_name": "LCMScheduler",
"_diffusers_version": "0.24.0",
"beta_end": 0.012,
"beta_schedule": "scaled_linear",
"beta_start": 0.00085,
"clip_sample": false,
"clip_sample_range": 1.0,
"dynamic_thresholding_ratio": 0.995,
"num_train_timesteps": 1000,
"original_inference_steps": 50,
"prediction_type": "epsilon",
"rescale_betas_zero_snr": false,
"sample_max_value": 1.0,
"set_alpha_to_one": true,
"steps_offset": 0,
"thresholding": false,
"timestep_scaling": 10.0,
"timestep_spacing": "leading",
"trained_betas": null
}
is outdated. steps_offset should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving steps_offset might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the scheduler/scheduler_config.json file
deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
Traceback (most recent call last):
File "/root/zrx/web-stable-diffusion/build.py", line 158, in
mod, params = trace_models(torch_dev_key)
File "/root/zrx/web-stable-diffusion/build.py", line 81, in trace_models
clip = trace.clip_to_text_embeddings(pipe)
File "/root/zrx/web-stable-diffusion/web_stable_diffusion/trace/model_trace.py", line 27, in clip_to_text_embeddings
mod = dynamo_capture_subgraphs(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/tvm/relax/frontend/torch/dynamo.py", line 198, in dynamo_capture_subgraphs
compiled_model(*params, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
return fn(*args, **kwargs)
File "/root/zrx/web-stable-diffusion/web_stable_diffusion/trace/model_trace.py", line 20, in forward
text_embeddings = self.clip(text_input_ids)[0]
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 800, in forward
return self.text_model(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 697, in forward
causal_attention_mask = _create_4d_causal_attention_mask(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
return callback(frame, cache_entry, hooks, frame_state)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
result = inner_convert(frame, cache_size, hooks, frame_state)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
return fn(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
return _compile(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
out_code = transform_code_object(code, transform)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
transformations(instructions, code_options)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
tracer.run()
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2069, in run
super().run()
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 719, in run
and self.step()
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in step
getattr(self, inst.opname)(inst)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2157, in RETURN_VALUE
self.output.compile_subgraph(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 857, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "/root/anaconda3/envs/mlc/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 957, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
r = func(*args, **kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/_dynamo/output_graph.py", line 1024, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/dynamo/output_graph.py", line 1009, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/torch/init.py", line 1607, in call
return self.compiler_fn(model
, inputs
, **self.kwargs)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/tvm/relax/frontend/torch/dynamo.py", line 184, in capture
mod
= from_fx(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/tvm/relax/frontend/torch/fx_translator.py", line 1635, in from_fx
return TorchFXImporter().from_fx(
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/tvm/relax/frontend/torch/fx_translator.py", line 1522, in from_fx
self.env[node] = self.convert_map[func_name](node)
File "/root/anaconda3/envs/mlc/lib/python3.9/site-packages/tvm/relax/frontend/torch/fx_translator.py", line 1291, in _getitem
raise ValueError("Multiple tensors as index not yet supported")
torch._dynamo.exc.BackendCompilerFailed: backend='_capture' raised:
ValueError: Multiple tensors as index not yet supported

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True

[12:20:18] /workspace/tvm/src/relax/ir/block_builder.cc:64: Warning: BlockBuilder destroyed with remaining blocks!
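
For context, the exception comes from the `_getitem` converter in TVM's fx_translator.py, which currently rejects advanced indexing with more than one tensor index. A minimal sketch of the kind of pattern that trips it (a hypothetical example, not the exact line inside transformers):

```python
import torch

# Hypothetical minimal reproduction: advanced indexing with two tensor
# indices at once. TorchDynamo records this as operator.getitem with a
# tuple of tensors, which the Relax FX importer does not yet handle.
x = torch.randn(4, 8)
rows = torch.arange(4)
cols = torch.tensor([1, 3, 5, 7])

y = x[rows, cols]  # -> ValueError: Multiple tensors as index not yet supported
```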

guoyaol (Contributor) commented Dec 4, 2023

We expect to solve this in an upcoming update and will keep you updated.

zrx0311 (Author) commented Dec 5, 2023

> We expect to solve this in an upcoming update and will keep you updated.

Thank you so much!

brynbellomy commented

I'm hitting this as well, after making several updates to the code to help enable SDXL. Do you know how this can be fixed, or do you have any hints?
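
One possible workaround while this pattern is unsupported (an untested sketch, not an official fix): rewrite any multi-tensor indexing in the traced model so the graph only ever indexes with a single tensor, for example via `torch.gather`:

```python
import torch

x = torch.randn(4, 8)
cols = torch.tensor([1, 3, 5, 7])

# Instead of x[torch.arange(4), cols] (two tensor indices), gather along
# dim=1 with a single index tensor, which avoids the multi-tensor getitem.
y = x.gather(1, cols.unsqueeze(1)).squeeze(1)
```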
