4.0.0 rc1 changi: node suddenly invalidates old block on fresh sync #2598
@sieniven it seems that the tx really is invalid. Currently syncing, and I get hash d60b62ee74a38e155b638df2073ea0d6e2844a13a5d467a57d65f335874307cd for block 1645507. The invalid tx seems to be in block 76bcc0329a449c50a23b542e4ee49c0217ea3537968a009121e371839ac72540 (also height 1645507), which is the beta14 chain, I believe. So I don't have the invalid block on my disk and can't vmap the txs. It seems that the "is invalid" error is correct; the question is why it's reverting. Will update when I have new info.
Checked the logs. This is the block from beta14 which became invalid in beta15.
Attaching one of my fresh sync logs: debug.log. And pulling just the heights: heights.txt. Ref:
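Pulling the heights out of a debug.log can be scripted. A minimal sketch, assuming the Core-style `height=N` fields that defid's `UpdateTip` lines also carry (the `extract_heights` helper and the sample lines are my own, illustrative only; adjust the pattern if your log format differs):

```python
import re

def extract_heights(log_text):
    """Collect every 'height=N' value from debug.log-style text.

    Assumes Bitcoin Core-style log lines; the regex is illustrative,
    not tied to any guaranteed defid format.
    """
    return [int(m) for m in re.findall(r"height=(\d+)", log_text)]

# Hypothetical sample lines in the Core UpdateTip style:
sample = (
    "UpdateTip: new best=76bcc0329a449c... height=1645507\n"
    "UpdateTip: new best=d60b62ee74a38e... height=1645508\n"
)
print(extract_heights(sample))  # -> [1645507, 1645508]
```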
@kuegi any chance you have specific peers added that are aligning your node to a different chain, perhaps?
@prasannavl your sync looks fine, yes. I have no added peers, but since the node was running through all the betas it has a long list. I am still syncing, but if the explicit invalidation solves it, then I think the issue is that the node does not realize from the headers that the chain is invalid, and is therefore likely to try to jump over, only to realize that it was actually an invalid fork. Will report when the sync is finished.
Thanks @kuegi. That seems like a possible theory. The team is also hunting down another issue related to invalid blocks and indexes; it could be related.
When you have more data, please do add it here. Thanks.
FYI: with the explicit invalidate, the sync worked nicely now. Will try to do some testing regarding the "mess up of state" on a big rollback.
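The explicit invalidate above can be issued over the node's JSON-RPC interface. A hedged sketch, assuming the Core-style `invalidateblock` RPC that defid inherits (the helper name and the `"manual-invalidate"` id are my own; the hash is the beta14 block from this thread):

```python
import json

def build_invalidate_payload(block_hash):
    """Build a JSON-RPC 1.0 request body for the Core-style
    'invalidateblock' call. Illustrative helper, not a defid API."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": "manual-invalidate",
        "method": "invalidateblock",
        "params": [block_hash],
    })

payload = build_invalidate_payload(
    "76bcc0329a449c50a23b542e4ee49c0217ea3537968a009121e371839ac72540"
)
# POST this body to the node's RPC port with your rpcuser/rpcpassword
# (e.g. via urllib.request), or use the CLI equivalent:
#   defi-cli invalidateblock <hash>
print(payload)
```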
@prasannavl reproducible:
Looks like the rollback is messing up the nonce data in the state.
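To illustrate why a rollback that fails to restore per-account nonces would make previously valid txs look invalid on replay, here is a toy model. This is purely a sketch of the failure mode, not defid's actual implementation; the `NonceState` class and its methods are hypothetical:

```python
# Toy account-nonce state with per-block undo records. If rollback
# skips restoring from undo data, replaying the same block fails
# with "invalid nonce" -- the suspected "messed up state" above.

class NonceState:
    def __init__(self):
        self.nonces = {}   # account -> next expected nonce
        self.undo = []     # stack of {account: nonce before block}

    def apply_block(self, txs):
        snapshot = {}
        for acct, nonce in txs:
            if nonce != self.nonces.get(acct, 0):
                raise ValueError(f"invalid nonce {nonce} for {acct}")
            # Record each account's pre-block nonce exactly once.
            snapshot.setdefault(acct, self.nonces.get(acct, 0))
            self.nonces[acct] = nonce + 1
        self.undo.append(snapshot)

    def rollback_block(self):
        # Restoring from undo keeps a later replay valid; skipping
        # this step leaves stale nonces behind.
        for acct, prev in self.undo.pop().items():
            self.nonces[acct] = prev

s = NonceState()
block = [("alice", 0), ("alice", 1)]
s.apply_block(block)
s.rollback_block()
s.apply_block(block)   # replays cleanly after a correct rollback
print(s.nonces)        # -> {'alice': 2}
```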
I tried to reproduce it with other filled blocks, but failed. So it's not a general issue with filled blocks or lots of TDs.
@prasannavl @sieniven: I didn't manage to reproduce it with new blocks, but I found the block where it "breaks" right now:
Summary
The node suddenly starts to roll back 5000 blocks and then stops with a now-invalid tx. So far it has happened 3 times: once during normal operation, and then every time I try a fresh sync.
The node syncs to a different block height each time (during the normal run it was 1654025; fresh sync 1: 1654683; fresh sync 2: 1651171),
then shows these messages ("heights" and file number differ between the cases).
Then it rolls back to 1645506, always showing this message:
Steps to Reproduce
Environment
I am now syncing to an earlier block and will make a local snapshot for easier reproducibility.