[Bug]: tuple index out of range when calling LLM with prompt keyword arg #13056
Comments
Hey there, @erictse! 👋 I'm Dosu, a bot crafted to assist with squashing bugs, answering queries, and guiding you towards becoming a contributor in the LlamaIndex universe. While we await the arrival of a human maintainer, I'm here to lend a robotic hand. I've caught your issue and am currently analyzing the circuits to provide you with a detailed response shortly. Stay tuned!
Yea it's technically not supported/advised. It might not be mentioned in the docs, but also, I don't think there's any example that uses it as a kwarg either 😅
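For context, here is a minimal sketch of one way this error can arise (an assumption about the mechanism, not taken from the LlamaIndex source): if a decorator around `complete` reads the prompt from positional `args`, a keyword-only call leaves that tuple empty. The `completion_callback` and `FakeLLM` names below are hypothetical.

```python
import functools

def completion_callback(fn):
    """Hypothetical wrapper that assumes the prompt is the first positional arg."""
    @functools.wraps(fn)
    def wrapped(self, *args, **kwargs):
        prompt = args[0]  # IndexError("tuple index out of range") when args is empty
        # ... a real callback would emit start/end events around `prompt` here ...
        return fn(self, *args, **kwargs)
    return wrapped

class FakeLLM:
    @completion_callback
    def complete(self, prompt, formatted=False, **kwargs):
        return f"echo: {prompt}"

llm = FakeLLM()
print(llm.complete("hi"))   # works: the prompt arrives positionally in args
llm.complete(prompt="hi")   # raises IndexError: tuple index out of range
```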
It appears that the `complete` method expects the prompt as a positional argument. To avoid this error, you should call the method with the prompt passed positionally: `output = llm.complete(prompt)`. If you need to pass additional options, they should be included as keyword arguments after the prompt: `output = llm.complete(prompt, formatted=False, additional_option="value")`. This approach ensures compatibility with the current method signature and avoids the error you've encountered.
Bug Description
This succeeds:
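(The succeeding snippet did not survive extraction; below is a hedged reconstruction, with the `OpenAI` model choice as an illustrative assumption since any LlamaIndex LLM behaves the same way.)

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

# Passing the prompt positionally succeeds.
output = llm.complete("Write a haiku about debugging.")
print(output.text)
```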
but see how it fails in Steps to Reproduce below. I could be wrong if providing `prompt` as a keyword arg is unsupported, but I didn't see any documentation or comments about that.
Version
0.10.30
Steps to Reproduce
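(The reproduction code was also lost in extraction; going by the issue title, the failing form is the keyword-argument call, roughly:)

```python
# Same llm as above; only the calling convention changes.
output = llm.complete(prompt="Write a haiku about debugging.")
# Expected failure per the issue title: IndexError: tuple index out of range
```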
Relevant Logs/Tracebacks