
Is it possible to fail a job other than by raising an exception? #1313

Open
MW3000 opened this issue Jul 31, 2020 · 8 comments · May be fixed by #1484

@MW3000

MW3000 commented Jul 31, 2020

I heavily use the new Retrying Failed Jobs feature. (Thank you again 🎉)

At the moment I raise an exception whenever I want a job to fail.
Is it possible to fail a job other than by raising an exception?

This would be useful because it would often communicate the code's intention better and also keep the logs clean.

@selwin
Collaborator

selwin commented Jul 31, 2020

At the moment raising an exception is the only way to fail a job. I'm open to ideas.

@MW3000
Author

MW3000 commented Aug 4, 2020

I don't know much about the implementation of rq. As a user, I imagine something like:

from rq import get_current_job

def mytask():
    job = get_current_job()
    job.fail("That's why I fail.")

@Angi2412

I don't know if I should open a new issue, but I think it's strongly related to this one:

Is there a possibility to raise an exception from "outside" of the worker or job in order to cancel/stop an ongoing job?

@MW3000
Author

MW3000 commented Aug 17, 2020

Is there a possibility to raise an exception from "outside" of the worker or job in order to cancel/stop an ongoing job?

This issue is about failing a job from “inside” the job. So I think the question of how to stop it from the outside merits its own issue. But perhaps you can reference this one.

@premchalmeti

premchalmeti commented Aug 20, 2020

Please see this answer.

Also, I use a custom retry handler function implementation. The retry handler's job is to:

  • Gracefully handle the custom exception (in my case, a ThirdPartyException class)
  • On a status.HTTP_429_TOO_MANY_REQUESTS response, re-enqueue the failed job after the API rate-limit duration

I think it's worth a look and might help in some way.

https://github.com/premkumar30/vendor-retailer/blob/b3d806569dc46e489bbf497f995dfc9492cfd492/vendor_retailer/utils.py#L44
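A minimal sketch of such a handler, written as an RQ worker exception handler. The ThirdPartyException class, the 429 check, and the wiring comment are assumptions reconstructed from the description above, not code from the linked repository:

```python
class ThirdPartyException(Exception):
    """Stand-in for the custom exception class mentioned above."""

    def __init__(self, status_code):
        super().__init__(f"third-party API returned {status_code}")
        self.status_code = status_code


def should_requeue(exc_type, exc_value):
    """Pure decision logic: retry only on a 429 from the third-party API."""
    return (issubclass(exc_type, ThirdPartyException)
            and exc_value.status_code == 429)


def retry_handler(job, exc_type, exc_value, traceback):
    """RQ exception handler; returning False stops the handler chain."""
    if should_requeue(exc_type, exc_value):
        # Here you would re-enqueue the job after the rate-limit window
        # (assumed detail; the actual delay mechanism is up to you).
        return False  # handled: don't fall through to the default handler
    return True  # not our exception: let the next handler run


# Wiring (assumed): Worker([queue], exception_handlers=[retry_handler])
```

Keeping the 429 decision in a separate pure function makes it easy to unit-test without a worker or a Redis connection.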

@selwin
Collaborator

selwin commented Aug 21, 2020

Starting from version 1.5.0, RQ allows you to configure retries.

queue.enqueue(my_func, retry=Retry(max=3, interval=60))

So if you want RQ to retry your function, simply raise an exception within your function.
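For illustration, a hedged sketch of the function side of this pattern. The names sync_orders, UpstreamUnavailable, and the service_is_up flag are invented for the example, and the commented enqueue line needs a live Redis connection:

```python
class UpstreamUnavailable(Exception):
    """Hypothetical error type for this sketch."""


def sync_orders(service_is_up=False):
    # Raising any exception marks the job as failed; enqueued with
    # retry=Retry(max=3, interval=60), RQ re-enqueues the job up to
    # three more times, waiting 60 seconds between attempts.
    if not service_is_up:
        raise UpstreamUnavailable("upstream API not reachable")
    return "synced"


# Enqueueing (assumed setup, needs Redis):
# queue.enqueue(sync_orders, retry=Retry(max=3, interval=60))
```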

@rpkak
Contributor

rpkak commented May 24, 2021

I think there's no solution other than using exceptions, because running code after a job has failed doesn't make sense.

I can implement a function similar to #1313 (comment) that raises an error carrying more information than a plain exception.

@rpkak rpkak linked a pull request Jun 2, 2021 that will close this issue
@selwin
Collaborator

selwin commented Jun 12, 2021

So the good news is that the in-development version of RQ actually solves a few of the issues discussed in this thread.

This issue is about failing a job from “inside” the job. So I think the question of how to stop it from the outside merits its own issue. But perhaps you can reference this one.

RQ 1.7.0 comes with send_stop_job_command, which allows you to stop a currently executing job from the outside: https://python-rq.org/docs/jobs/#stopping-a-currently-executing-job

Also, I use a custom retry handler function implementation

RQ 1.8.0 (to be released) will include on_success and on_failure callbacks, so you can decide for yourself what to do whenever a job finishes or fails. The PR is https://github.com/rq/rq/pull/1480/files
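A minimal sketch of such callbacks. The reporting bodies and names are invented for illustration; the callback signatures follow the linked PR, and the commented enqueue line needs a live Redis connection:

```python
def report_success(job, connection, result, *args, **kwargs):
    """on_success callback: runs after the job returns normally."""
    message = f"job {job.id} finished with result {result!r}"
    print(message)
    return message


def report_failure(job, connection, type, value, traceback):
    """on_failure callback: runs after the job raises."""
    message = f"job {job.id} failed: {value}"
    print(message)
    return message


# Enqueueing (assumed setup, needs Redis):
# queue.enqueue(my_func, on_success=report_success, on_failure=report_failure)
```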
