Problem
Our write-behind queue is growing very large because it is not processed efficiently. We have configured a batch size of 1000, yet most batches contain fewer than 10 items.
Expected behavior
Since we are using coalescing write-behind queues, only the latest version of each map entry needs to be persisted, so we would expect batches of 1000 items whenever the write-behind queue grows large.
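To illustrate what coalescing implies for the queue, here is a minimal, self-contained sketch (our own code, not Hazelcast internals): repeated writes to the same key collapse into a single pending entry, so only the latest version has to be persisted.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of write coalescing: the last write per key wins,
// so the number of pending entries is bounded by the number of distinct keys.
public class CoalescingSketch {

    // Coalesce a sequence of (key, value) writes into the pending entries
    // that a flush would actually have to persist.
    static Map<String, String> coalesce(List<Map.Entry<String, String>> writes) {
        Map<String, String> pending = new LinkedHashMap<>();
        for (Map.Entry<String, String> w : writes) {
            pending.put(w.getKey(), w.getValue()); // overwrites any earlier version
        }
        return pending;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> writes = List.of(
                Map.entry("k1", "v1"),
                Map.entry("k2", "v1"),
                Map.entry("k1", "v2")); // supersedes ("k1", "v1")
        Map<String, String> pending = coalesce(writes);
        // 3 writes arrive, but only 2 entries need to be persisted
        System.out.println(pending.size() + " " + pending.get("k1"));
    }
}
```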
To Reproduce
To reproduce the issue, you will need a workload that mixes store and delete operations. We use around 80% store and 20% delete operations.
Configure a map with a coalescing write-behind MapStore and a batch size of 1000.
Execute the workload and observe the MapStore.storeAll() and MapStore.deleteAll() callbacks.
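For reference, the map configuration looks roughly like this (the map and class names are placeholders; the map-store settings are the standard Hazelcast write-behind options):

```xml
<map name="my-map">
    <map-store enabled="true">
        <class-name>com.example.MyMapStore</class-name>
        <!-- write-behind: flush asynchronously after this delay -->
        <write-delay-seconds>5</write-delay-seconds>
        <!-- we expect batches of up to 1000 entries per callback -->
        <write-batch-size>1000</write-batch-size>
        <!-- coalescing: only the latest version of each entry is persisted -->
        <write-coalescing>true</write-coalescing>
    </map-store>
</map>
```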
I already opened a PR which contains a test that shows the problem: #24672
Additional context
In the above-mentioned PR I have already implemented a fix which seems straightforward - please check, approve, and merge :-)
This PR improves batching in the write-behind processor when it is used with coalescing write-behind queues.

Without this PR, when many store and delete operations are mixed in the write-behind queue, you end up with many small callbacks to your MapStore implementation. This is bad for performance because it effectively negates the advantages of batching.

With this PR and write coalescing enabled, the same mixed workload results in just two callbacks to your MapStore implementation: one for all the store operations and one for all the delete operations.
Fixes #24763
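The intended behavior can be sketched with a small, self-contained example (our own types, not Hazelcast internals): a coalesced queue mixing store and delete operations should produce exactly two MapStore callbacks per flush, one storeAll() with every pending store and one deleteAll() with every pending delete.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the batching fix: partition the coalesced queue
// into the two batches a single flush would hand to the MapStore.
public class BatchingSketch {

    enum Op { STORE, DELETE }

    record Entry(String key, String value, Op op) {}

    // Split pending entries into the storeAll() batch and the deleteAll() batch.
    static Map<Op, List<Entry>> partition(List<Entry> queue) {
        Map<Op, List<Entry>> batches = new LinkedHashMap<>();
        batches.put(Op.STORE, new ArrayList<>());
        batches.put(Op.DELETE, new ArrayList<>());
        for (Entry e : queue) {
            batches.get(e.op()).add(e);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Entry> queue = List.of(
                new Entry("k1", "v1", Op.STORE),
                new Entry("k2", null, Op.DELETE),
                new Entry("k3", "v3", Op.STORE));
        Map<Op, List<Entry>> batches = partition(queue);
        // Two callbacks total: one storeAll() with the stores, one deleteAll() with the deletes.
        System.out.println(batches.get(Op.STORE).size() + " stores, "
                + batches.get(Op.DELETE).size() + " deletes");
    }
}
```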
Fixes #24763 on 5.3.z
Backport of #24672