TEST FAILURE - com.hazelcast.map.BackupTest.testBackupMigrationAndRecovery2 #1201

Closed
mdogan opened this issue Nov 22, 2013 · 0 comments
Labels: Source: Internal PR or issue was opened by an employee, Type: Test-Failure

mdogan (Contributor) commented Nov 22, 2013

testBackupMigrationAndRecovery2(com.hazelcast.map.BackupTest)  Time elapsed: 48.158 sec  <<< FAILURE!
java.lang.AssertionError: Backup size invalid, node-count: 3 expected:<100000> but was:<98639>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at com.hazelcast.map.BackupTest.checkMapSizes(BackupTest.java:286)
    at com.hazelcast.map.BackupTest.testBackupMigrationAndRecovery(BackupTest.java:235)
    at com.hazelcast.map.BackupTest.testBackupMigrationAndRecovery2(BackupTest.java:213)
ghost assigned mdogan and ahmetmircik Nov 22, 2013
ahmetmircik added a commit to ahmetmircik/hazelcast that referenced this issue Nov 29, 2013
mdogan closed this as completed Dec 2, 2013
mmedenjak added the Source: Internal PR or issue was opened by an employee label Jan 28, 2020
devOpsHazelcast pushed a commit that referenced this issue Mar 27, 2024
Since Hazelcast IMDG 4.1, with the introduction of parallel migrations,
the "partition assignments version", a monotonically increasing version
number for the whole partition replica assignments data structure, was
replaced with a "partition state stamp": a hash calculated over the
individual partitions' versions. On each partition replica assignment
update, the partition state stamp needs to be recalculated.
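
For illustration, here is a minimal sketch of how such a stamp can be
computed, assuming it is a single 64-bit hash folded over the
per-partition version numbers. The class and method names are
hypothetical, not Hazelcast's internal API:

```java
// Hypothetical sketch, not Hazelcast's internal API: models the
// "partition state stamp" as a 64-bit hash over per-partition versions.
final class PartitionVersions {

    // One version counter per partition, bumped on every replica assignment update.
    private final long[] versions;

    PartitionVersions(int partitionCount) {
        this.versions = new long[partitionCount];
    }

    void incrementVersion(int partitionId) {
        versions[partitionId]++;
    }

    // FNV-1a-style fold over the per-partition versions: members holding
    // identical versions for every partition compute an identical stamp.
    long calculateStamp() {
        long stamp = 0xcbf29ce484222325L; // FNV offset basis
        for (long version : versions) {
            stamp ^= version;
            stamp *= 0x100000001b3L;      // FNV prime
        }
        return stamp;
    }
}
```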

In certain cases, it is expected that all partition replica assignments
will be updated:
- during the initial partition assignment ("first arrangement")
- when a member that joins the cluster applies the partition replica
assignments received from the master member
- when a member recovers partition replica assignments from persistence

In such cases, updating the partition state stamp on each partition
replica update is inefficient. Instead, the partition state stamp can be
updated just once, after the whole batch of partition replica
assignments has been applied.
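
The following sketch illustrates the batching described above, reusing
the hypothetical PartitionVersions class from the previous example. The
slow path recalculates the stamp after every single update, while the
fast path defers it until the whole batch has been applied:

```java
// Hypothetical sketch of the optimization: recalculate the stamp once
// per batch instead of once per partition replica assignment update.
final class PartitionTable {

    private final PartitionVersions versions;
    private volatile long stateStamp;

    PartitionTable(int partitionCount) {
        this.versions = new PartitionVersions(partitionCount);
    }

    // Slow path: one O(partitionCount) stamp recalculation per update,
    // so a batch of N updates costs O(N * partitionCount) hash steps.
    void applySingleUpdate(int partitionId) {
        versions.incrementVersion(partitionId);
        stateStamp = versions.calculateStamp();
    }

    // Fast path: apply the whole batch of replica assignment updates
    // first, then recalculate the stamp exactly once.
    void applyBatch(int[] partitionIds) {
        for (int partitionId : partitionIds) {
            versions.incrementVersion(partitionId);
        }
        stateStamp = versions.calculateStamp();
    }

    long stateStamp() {
        return stateStamp;
    }
}
```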

Measured the following timings:
- Initial partition assignment on a single member with 20K partitions
  - current `master` branch: 3870 millis (20001 partition stamp updates)
  - with this PR: 28 millis (1 partition stamp update)
- Applying the initial partition state on the 3rd member joining a
cluster with 2 members
  - current `master` branch: 3260 millis (40002 partition stamp updates)
  - with this PR: 4 millis (1 partition stamp update)

(cherry picked from commit 1030206
with an additional change to adapt the test for JUnit 4, which is in
use in the 5.3 series)

Fixes [HZ-3652] on the 5.3.7 branch

Backport of #25905

[HZ-3652]:
https://hazelcast.atlassian.net/browse/HZ-3652?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ

GitOrigin-RevId: 26505d9bab59a4dcf1a4a7a8ed36bac661e43b39