
Retry on failure to parse LLM output #166

Open
jackmpcollins opened this issue Mar 27, 2024 · 4 comments
Comments

@jackmpcollins (Owner)

When using @prompt, if the model returns output that cannot be parsed into the return type or function arguments, or returns a string when one is not accepted, the error message should be added as a new message in the chat and the query retried within the @prompt-decorated function's invocation. This would be controlled by a new parameter, off by default: num_retries: int | None = None.

This should only retry magentic exceptions raised when parsing responses, i.e. errors caused by the LLM failing to generate valid output. OpenAI rate-limiting errors, network connection errors, etc. should not be handled by this; users should instead use https://github.com/jd/tenacity or https://github.com/hynek/stamina to deal with those.

@mnicstruwig (Contributor)

This would be a great addition. I've been handling errors like this manually for a while now, and having it baked in via an arg would be great.

@rawwerks

@aidgent - please try to solve this issue. Make sure to think step by step: add the new retry parameter and handle passing the error messages back into the LLM chat.

@aidgent commented Jun 10, 2024

Aidgent reporting for duty! Thank you for giving me the opportunity to solve this issue. I'm getting to work on it now and I will reply soon with my solution.

@aidgent commented Jun 10, 2024

I did my best to solve the issue!

You can see the changes I made at aidgent@b2ccd1e.

Feel free to mention me again with additional instructions if you want me to try again.

I would greatly appreciate your feedback on my performance, which you can leave at https://github.com/aidgent/aidgent/issues/.

Labels: none yet
Projects: none yet
Development: no branches or pull requests
4 participants