
Topology recovery retry fixes for auto-delete queues #692

Merged: 1 commit merged into rabbitmq:5.x.x-stable on Jul 3, 2021

Conversation

vikinghawk (Contributor):

Proposed Changes

Fixes issues with the topology recovery retry logic around auto-delete queues.

Types of Changes

What types of changes does your code introduce to this project?
Put an x in the boxes that apply

  • Bugfix (non-breaking change which fixes issue #NNNN)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation (correction or otherwise)
  • Cosmetics (whitespace, appearance)

Checklist

Put an x in the boxes that apply. You can also fill these out after creating
the PR. If you're unsure about any of them, don't hesitate to ask on the
mailing list. We're here to help! This is simply a reminder of what we are
going to look for before merging your code.

  • I have read the CONTRIBUTING.md document
  • I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
  • All tests pass locally with my changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)
  • Any dependent changes have been merged and published in related repositories

Further Comments

If this is a relatively large or complex change, kick off the discussion by
explaining why you chose the solution you did and what alternatives you
considered, etc.

@@ -62,7 +65,7 @@
  if (context.entity() instanceof RecordedQueue) {
      final RecordedQueue recordedQueue = context.queue();
      AutorecoveringConnection connection = context.connection();
-     connection.recoverQueue(recordedQueue.getName(), recordedQueue, false);
+     connection.recoverQueue(recordedQueue.getName(), recordedQueue);
vikinghawk (Contributor, author) commented on this change:

Before: if this call failed to recover the queue, the error was delivered to the exception handler and effectively swallowed from the perspective of this retry logic, so the queue never got recovered.

Now: recoverQueue throws an exception if an error occurs, and the retry logic catches it and keeps retrying as long as the maximum number of retry attempts has not been reached.
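For context, here is a minimal sketch of how an application opts into this topology recovery retry logic. It assumes the pre-configured RETRY_ON_QUEUE_NOT_FOUND_RETRY_HANDLER builder exposed by com.rabbitmq.client.impl.recovery.TopologyRecoveryRetryLogic in recent 5.x releases; the retryAttempts and backoffPolicy values shown are illustrative choices, not defaults.

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.impl.recovery.TopologyRecoveryRetryLogic;

public class RetryHandlerSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Automatic connection and topology recovery must both be enabled
        // for the retry handler to be consulted at all.
        factory.setAutomaticRecoveryEnabled(true);
        factory.setTopologyRecoveryEnabled(true);
        // Pre-configured builder that retries recovery of queues, bindings and
        // consumers when the broker reports their queue as gone, the typical
        // failure mode for auto-delete queues after a connection loss.
        factory.setTopologyRecoveryRetryHandler(
            TopologyRecoveryRetryLogic.RETRY_ON_QUEUE_NOT_FOUND_RETRY_HANDLER
                .retryAttempts(3)
                .backoffPolicy(attempt -> Thread.sleep(attempt * 1000L))
                .build());
        try (Connection connection = factory.newConnection()) {
            // declare exchanges, auto-delete queues, bindings, consumers, ...
        }
    }
}
```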

@@ -165,14 +164,52 @@
  } else if (consumer.getChannel() == channel) {
      final RetryContext retryContext = new RetryContext(consumer, context.exception(), context.connection());
      RECOVER_CONSUMER_QUEUE.call(retryContext);
-     context.connection().recoverConsumer(consumer.getConsumerTag(), consumer, false);
+     context.connection().recoverConsumer(consumer.getConsumerTag(), consumer);
vikinghawk (Contributor, author) commented on this change:

Same idea here as with recoverQueue: we want recoverConsumer to throw an exception rather than deliver it to the exception handler and swallow it.
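To make the contrast concrete, below is a rough, self-contained sketch of the retry pattern described in these comments. RecoveryAction, maxRetryAttempts, and backoffMillis are hypothetical stand-ins for illustration, not the client's actual internals; the point is that the recovery call has to propagate its exception for the surrounding loop to notice the failure and try again.

```java
// Hypothetical illustration of the "throw instead of swallow" retry pattern.
public class RetryLoopSketch {

    @FunctionalInterface
    interface RecoveryAction {
        void run() throws Exception; // e.g. connection.recoverQueue(name, queue)
    }

    private final int maxRetryAttempts = 3;
    private final long backoffMillis = 1000;

    void recoverWithRetry(RecoveryAction action) throws Exception {
        int attempts = 0;
        while (true) {
            try {
                // The recovery call must throw on failure (instead of handing the
                // error to the ExceptionHandler) for this loop to notice and retry.
                action.run();
                return;
            } catch (Exception e) {
                attempts++;
                if (attempts >= maxRetryAttempts) {
                    throw e; // give up once the limit is reached
                }
                Thread.sleep(backoffMillis); // back off before the next attempt
            }
        }
    }
}
```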

@michaelklishin (Member):

@vikinghawk have you created this branch off of main? It does not rebase cleanly on top of 5.x.x and the conflicts are surprising: this branch says the version is 6.x.

@michaelklishin (Member):

Disregard my earlier comment; I re-created 5.x.x in this local clone and rebasing succeeds now.

michaelklishin merged commit e88962d into rabbitmq:5.x.x-stable on Jul 3, 2021
michaelklishin added this to the 5.13.0 milestone on Jul 3, 2021
@michaelklishin (Member):

Thank you!

@michaelklishin (Member):

@vikinghawk can you please double check if I have merged both PRs correctly? There were conflicts in several files.

@michaelklishin (Member):

@vikinghawk also, can you please submit this PR against main?

@michaelklishin (Member):

Actually no need to do that, I managed to merge 5.x.x-stable into main.

@vikinghawk (Contributor, author):

LGTM. Thanks!

@acogoluegnes (Contributor):

BTW, 5.13.0.RC2 has been released if you want to try it out.
