
fix: dont log "couldn't connect to miner" in 2.5 #4768

Closed
hstove wants to merge 1 commit

Conversation

@hstove (Contributor) commented May 8, 2024

When the signer looks up the current coordinator during an active reward cycle, it assumes the miner should be the coordinator. In 2.5, this isn't the case. When DKG doesn't complete before the active reward cycle, the signer continues to look up the coordinator, but it still emits this error log.

TL;DR: the signer's coordinator logic is fine, as it already falls back to the DKG coordinator, but it shouldn't log this error in 2.5.

To implement this, I've extended the RewardCycleInfo struct to include the current epoch, and I pass this struct around instead of just current_reward_cycle in a few places.
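
For illustration, a minimal sketch of the idea (the type, field, and method names here are assumed for the example, not taken from the actual stacks-signer code):

// A minimal sketch, with assumed names: RewardCycleInfo carries the epoch
// alongside the reward cycle, so callers can tell whether the node is
// still in epoch 2.5.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum StacksEpochId {
    Epoch25,
    Epoch30,
}

#[derive(Debug, Clone)]
struct RewardCycleInfo {
    reward_cycle: u64,
    epoch: StacksEpochId, // the newly added field
}

impl RewardCycleInfo {
    // In 2.5 the miner is not expected to be the coordinator, so a failed
    // miner lookup is normal there and should not be logged as an error.
    fn should_log_miner_connect_error(&self) -> bool {
        self.epoch >= StacksEpochId::Epoch30
    }
}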

@hstove requested review from jferrant and obycode on May 8, 2024 18:02
@hstove linked an issue on May 8, 2024 that may be closed by this pull request
@hstove changed the title from 'wip: dont log "couldn't connect to miner" in 2.5' to 'fix: dont log "couldn't connect to miner" in 2.5' on May 8, 2024
@obycode (Contributor) left a comment

LGTM!

@@ -427,7 +430,7 @@ impl SignerRunLoop<Vec<OperationResult>, RunLoopCommand> for RunLoop {
                 &self.stacks_client,
                 event.as_ref(),
                 res.clone(),
-                current_reward_cycle,
+                &current_reward_cycle_info,
@jferrant (Collaborator) commented May 9, 2024

I don't think this needs to be passed by reference here, since you already call as_ref() above.
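
For context, a hypothetical illustration of the point (names invented for the example): taking & of a binding that is already a reference yields a &&T, which clippy's needless_borrow lint flags.

struct RewardCycleInfo; // placeholder standing in for the real struct

fn process(info: &RewardCycleInfo) {
    let _ = info; // body elided
}

fn run_loop(info: &RewardCycleInfo) {
    process(&info); // compiles via auto-deref, but clippy: needless_borrow
    process(info);  // preferred: pass the existing reference through
}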

@@ -450,7 +453,7 @@ impl SignerRunLoop<Vec<OperationResult>, RunLoopCommand> for RunLoop {
             }
         }
         // After processing event, run the next command for each signer
-        signer.process_next_command(&self.stacks_client, current_reward_cycle);
+        signer.process_next_command(&self.stacks_client, &current_reward_cycle_info);
@jferrant (Collaborator)

Similarly here; I don't think you need the reference.

@@ -516,7 +522,7 @@ impl Signer {
             // We cannot execute a DKG command if we are not the coordinator
             Some(self.get_coordinator_dkg().0)
         } else {
-            self.get_coordinator_sign(current_reward_cycle).0
+            self.get_coordinator_sign(&current_reward_cycle_info).0
@jferrant (Collaborator)

Similarly, I don't think you need the reference here.

@@ -653,13 +659,18 @@ impl Signer {
         let coordinator_pubkey = if Self::is_dkg_message(&packet.msg) {
             self.get_coordinator_dkg().1
         } else {
-            self.get_coordinator_sign(current_reward_cycle).1
+            self.get_coordinator_sign(&current_reward_cycle_info).1
@jferrant (Collaborator)

Same here.

@jferrant (Collaborator) commented May 9, 2024

A bit unrelated to this PR, but it would be nice to remove the internal call to get_node_epoch in process_dkg (this was done in a separate branch that also added the epoch to the reward cycle info, but that branch is dead). You just need to pass the reward cycle info down to it. It's not a blocker for this PR, so if you don't want to do it here, I'll make a ticket and do it on top of this.

However, would you mind fixing the references I mentioned above? I recommend running clippy on stacks-signer every time you open a PR: cargo clippy -p stacks-signer --no-deps --tests --all-features
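
As a rough sketch of the get_node_epoch suggestion above (all signatures assumed for illustration; this is not the actual stacks-signer code), the epoch would travel with the reward cycle info instead of being fetched inside process_dkg:

// Hypothetical sketch, reusing the RewardCycleInfo shape from the PR
// description: process_dkg reads the epoch it already received rather
// than querying the node again via an internal get_node_epoch call.
fn process_dkg(info: &RewardCycleInfo) {
    let epoch = info.epoch; // epoch now rides along with the reward cycle
    let _ = epoch;
    // ... run the DKG round ...
}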

@hstove (Contributor, Author) commented May 16, 2024

I think this should be closed given the v0/v1 changes, yeah? If not, I think it'd be best to rebase on top of #4778. @jferrant, what do you think?

@jferrant (Collaborator)

> I think this should be closed given the v0/v1 changes, yeah? If not, I think it'd be best to rebase on top of #4778. @jferrant, what do you think?

Haha, I am useless. Yes, that made sense. But now you can rebase on top of develop, as v0 is merged!

@hstove (Contributor, Author) commented May 23, 2024

Closing this - no longer relevant

@hstove closed this May 23, 2024
Successfully merging this pull request may close this issue:

Remove signer log when DKG doesn't succeed