
[Bug] Producer should retry putting the message if OS_PAGE_CACHE_BUSY #8053

Open
3 tasks done
biningo opened this issue Apr 23, 2024 · 13 comments · May be fixed by #8054

Comments

biningo commented Apr 23, 2024

Before Creating the Bug Report

  • I found a bug, not just a question (questions should be asked in GitHub Discussions).

  • I have searched the GitHub Issues and GitHub Discussions of this repository and believe that this is not a duplicate.

  • I have confirmed that this bug belongs to the current repository, not other repositories of RocketMQ.

Runtime platform environment

Ubuntu 20.04

RocketMQ version

develop

JDK Version

JDK 8

Describe the Bug

If the broker is overloaded or disk I/O is busy, it cleans up some expired requests and returns SYSTEM_BUSY, in which case the producer does not retry sending the message to the broker. So I think a PutMessageStatus of OS_PAGE_CACHE_BUSY should also return SYSTEM_BUSY.

reference: https://github.com/apache/rocketmq/blob/develop/broker/src/main/java/org/apache/rocketmq/broker/processor/SendMessageProcessor.java#L432
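To make the proposal concrete, here is a minimal sketch of the status-to-response mapping being discussed. It is condensed and illustrative, not the verbatim SendMessageProcessor source: PutMessageStatus, ResponseCode, and their constants are real RocketMQ identifiers, but the helper class and method around them are hypothetical (and the ResponseCode package path differs between 4.x and 5.x).

```java
// Illustrative sketch of the mapping discussed above, NOT the verbatim
// SendMessageProcessor code. PutMessageStatus and ResponseCode are real
// RocketMQ types; the helper method itself is hypothetical.
import org.apache.rocketmq.remoting.protocol.ResponseCode; // 4.x: org.apache.rocketmq.common.protocol.ResponseCode
import org.apache.rocketmq.store.PutMessageStatus;

final class StatusMappingSketch {
    static int toResponseCode(PutMessageStatus status) {
        switch (status) {
            case PUT_OK:
                return ResponseCode.SUCCESS;
            case OS_PAGE_CACHE_BUSY:
                // Current behavior: SYSTEM_ERROR, which the producer retries.
                // This issue argues for ResponseCode.SYSTEM_BUSY instead, which
                // the producer does not retry by default, matching the broker
                // fast-failure path that rejects expired requests.
                return ResponseCode.SYSTEM_ERROR;
            default:
                return ResponseCode.SYSTEM_ERROR;
        }
    }
}
```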

Steps to Reproduce

  1. Mock PutMessageStatus = OS_PAGE_CACHE_BUSY on the broker (see the sketch after this list)
  2. Put a message into the broker
  3. Observe the number of producer retries
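For step 1, one way to force the status is to stub the broker's message store in a test. The sketch below assumes Mockito; MessageStore.asyncPutMessage and the PutMessageResult constructor are real store-module APIs, while the surrounding scaffolding is illustrative only.

```java
// Hypothetical repro sketch for step 1: stub the message store so every put
// completes with OS_PAGE_CACHE_BUSY, then send with a producer and count the
// retries. MessageStore, PutMessageResult, and PutMessageStatus are real
// RocketMQ store-module types; this test harness is illustrative.
import java.util.concurrent.CompletableFuture;

import org.apache.rocketmq.store.MessageStore;
import org.apache.rocketmq.store.PutMessageResult;
import org.apache.rocketmq.store.PutMessageStatus;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class OsPageCacheBusyRepro {
    MessageStore mockBusyStore() {
        MessageStore store = mock(MessageStore.class);
        // Every asyncPutMessage call completes with OS_PAGE_CACHE_BUSY.
        when(store.asyncPutMessage(any()))
            .thenReturn(CompletableFuture.completedFuture(
                new PutMessageResult(PutMessageStatus.OS_PAGE_CACHE_BUSY, null)));
        return store;
    }
}
```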

What Did You Expect to See?

The producer does not retry putting the message when the PutMessageStatus is OS_PAGE_CACHE_BUSY.

What Did You See Instead?

The producer retries putting the message when the PutMessageStatus is OS_PAGE_CACHE_BUSY.

Additional Context

No response

frinda (Contributor) commented Apr 24, 2024

Why can't you retry?
When an MQ is busy, you can retry sending messages to other MQs.

biningo (Author) commented Apr 24, 2024

> Why can't you retry? When an MQ is busy, you can retry sending messages to other MQs.

Refer to the discussions below:
#5838 #1196 #2713

SYSTEM_BUSY should not be retried.

biningo (Author) commented Apr 24, 2024

The community is quite divided on whether SYSTEM_BUSY should be retried, but there is no retry in the latest version.

RongtongJin (Contributor) commented

From another perspective, returning busy is indeed semantically more reasonable than returning error.

humkum (Contributor) commented Apr 24, 2024

Generally speaking, a busy page cache only causes process write operations to be suspended for a few seconds. Retrying here will not add pressure to synchronous page cache reclaim, so there is no problem with retrying. Semantically, though, it should indeed be SYSTEM_BUSY. But that brings us back to the original question: should SYSTEM_BUSY be retried? This change would cause messages that previously succeeded (via retry while the page cache was busy) to fail now.

yuz10 (Member) commented Apr 24, 2024

Maybe we can retry SYSTEM_BUSY by default. Currently, you need to call producer.addRetryResponseCode(ResponseCode.SYSTEM_BUSY); to add it to the retry list.
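For reference, a minimal usage sketch of the workaround yuz10 describes: addRetryResponseCode and ResponseCode.SYSTEM_BUSY are real client APIs (the ResponseCode package path differs between 4.x and 5.x), while the group, topic, and nameserver address are placeholders.

```java
// Minimal sketch of the workaround above: opt the producer in to retrying
// SYSTEM_BUSY. Group/topic/nameserver values are placeholders.
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.remoting.protocol.ResponseCode; // 4.x: org.apache.rocketmq.common.protocol.ResponseCode

public class RetrySystemBusyExample {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("example_producer_group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        // Treat SYSTEM_BUSY as a retryable response code.
        producer.addRetryResponseCode(ResponseCode.SYSTEM_BUSY);
        producer.setRetryTimesWhenSendFailed(3);
        producer.start();
        try {
            SendResult result = producer.send(new Message("ExampleTopic", "hello".getBytes()));
            System.out.println(result);
        } finally {
            producer.shutdown();
        }
    }
}
```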

biningo (Author) commented Apr 24, 2024

In my opinion:

  1. If the MQ cluster has a single broker node, retrying is unreasonable.
  2. If the MQ cluster has multiple broker nodes, retrying is reasonable.

And it's not realistic to have only one broker in a production environment.

humkum (Contributor) commented Apr 24, 2024

> In my opinion:
>
>   1. If the MQ cluster has a single broker node, retrying is unreasonable.
>   2. If the MQ cluster has multiple broker nodes, retrying is reasonable.
>
> And it's not realistic to have only one broker in a production environment.

Agree.

biningo (Author) commented Apr 24, 2024

@RongtongJin So what do you think? I think the producer can retry SYSTEM_BUSY by default. If the community agrees, I can open a new PR to fix this issue.

cserwen (Member) commented Apr 28, 2024

> In my opinion:
>
>   1. If the MQ cluster has a single broker node, retrying is unreasonable.
>   2. If the MQ cluster has multiple broker nodes, retrying is reasonable.
>
> And it's not realistic to have only one broker in a production environment.

We have discussed this issue countless times (e.g. #5838 #2726 #1196...), and I think it is time to reach a definite conclusion. There is no doubt that we should support retries on SYSTEM_BUSY.

Maybe we can start an email thread to vote on it. @RongtongJin @yuz10 @biningo

wz2cool (Contributor) commented Apr 29, 2024

> In my opinion:
>
>   1. If the MQ cluster has a single broker node, retrying is unreasonable.
>   2. If the MQ cluster has multiple broker nodes, retrying is reasonable.
>
> And it's not realistic to have only one broker in a production environment.
>
> We have discussed this issue countless times (e.g. #5838 #2726 #1196...), and I think it is time to reach a definite conclusion. There is no doubt that we should support retries on SYSTEM_BUSY.
>
> Maybe we can start an email thread to vote on it. @RongtongJin @yuz10 @biningo

Agree.

wz2cool (Contributor) commented Apr 29, 2024

> Maybe we can retry SYSTEM_BUSY by default. Currently, you need to call producer.addRetryResponseCode(ResponseCode.SYSTEM_BUSY); to add it to the retry list.

@biningo Currently, you can try this to solve your problem.

biningo changed the title from "[Bug] Producer should not retry putting the message if OS_PAGE_CACHE_BUSY" to "[Bug] Producer should retry putting the message if OS_PAGE_CACHE_BUSY" on May 11, 2024
biningo (Author) commented May 11, 2024

#5845 has already been merged, so I changed the issue title.
