Fixes early notifications after config reload (#2492) #3835
base: main
Conversation
I was trying to find an answer as to why we set hasFlushed before we actually start flushing (let alone finish it). The flag was introduced in #1301 to mitigate duplicate notifications by avoiding executing … With this change the problem is …
dispatch/dispatch.go (outdated diff context):

```go
ctx = notify.WithGroupKey(ctx, ag.GroupKey())
ctx = notify.WithGroupLabels(ctx, ag.labels)
ctx = notify.WithReceiverName(ctx, ag.opts.Receiver)
ctx = notify.WithRepeatInterval(ctx, ag.opts.RepeatInterval)
ctx = notify.WithMuteTimeIntervals(ctx, ag.opts.MuteTimeIntervals)
ctx = notify.WithActiveTimeIntervals(ctx, ag.opts.ActiveTimeIntervals)

ag.flush(func(alerts ...*types.Alert) bool {
```
I haven't had time to look at the other changes, but I think there is a subtle bug here. Prior to this change, the timer was reset before the flush, so Group Interval is unaffected by the duration of the notify function. However, now the timer is reset after the flush. This means Group Interval is reset relative to the duration of the notify function instead of relative to the time of the previous flush. Won't this cause Group interval to drift over time?
I argue s/bug/trade-off/. It picks success (or timeout) as the point of the last group notification. It might even match user experience better: it counts from the time I got paged rather than from the time Alertmanager started trying to page me. On top of that we have no test case that tries to capture the drifting behavior.

In terms of drift we also have the small race around `ag.next.Reset(0)` and configuration reload (which ends up creating a whole new Dispatcher). The drift is larger in case of errors (where we might end up skipping a notification anyway).

The alternative to the above is to start tracking both the success time and the start time in the nflogpb.
The problem is that it breaks Alertmanager HA, which relies on the Group wait and Group interval timers being in lock-step on all replicas for the same aggregation group. If those timers drift, then de-duplication in dedupStage will stop working.
Maybe I am missing something, but to my understanding the only communication between the replicas in the HA set-up is the nflog? Specifically, the dedup stage today relies on `return entry.Timestamp.Before(n.now().Add(-repeat))` if no alerts changed and the config was reloaded.

Today config reload is probably one of the likeliest places where the group interval can drift. We re-create the dispatcher and the pipeline can run earlier than the group interval. That is essentially the bug we see during config reload: the pipeline runs earlier, sees unrelated changes, and makes a new notification.

What do you think about recording the start of the notification and then using it to set the timer accordingly?
> Maybe I am missing something, but to my understanding the only communication between the replicas in the HA set-up is the nflog? Specifically, the dedup stage today relies on `return entry.Timestamp.Before(n.now().Add(-repeat))` if no alerts changed and the config was reloaded.

That's correct!
However, there is also an implicit assumption in Alertmanager that the Group interval for each aggregation group is synchronized across all Alertmanagers in an HA set-up for dedup to work. It looks like this: [pipeline diagram omitted]. You can see there that AM2 and AM3 wait `peer_timeout` and `2*peer_timeout` seconds in the wait stage. This is how Alertmanager failover works in case AM1 or AM2 cannot send a notification. However, this behavior relies on the Group wait and Group interval timers firing at the same time. Alertmanagers do not communicate with each other to synchronize these timers; instead they use the time an alert was received from Prometheus, as Prometheus broadcasts each alert to all Alertmanagers.
However, if the timers become de-synchronized then the wait stage stops working and each Alertmanager could enter the dedup and retry stages at the same time, causing duplicate notifications.
There are a number of issues with this, but the important issue here is that config reloads can cause the timers to be de-synchronized if the config is reloaded at different times. I think this is where the fix needs to be.
Fair point on synchronizing config reload time across cluster members. I think this is an additional problem, as we can see that a config reload triggers an `ag.flush` before the expiry of the group interval. The acceptance test that I added shows the issue with a single instance.

I have revised the patch to carry the time of the `ag.flush` and use it to "mute" early notifications. If we want to, we can build on it and use it to synchronize the timer across the instances and config reloads.
Something like this:

- Config reload
- New Dispatcher, new aggregation groups
- `ag.next.Reset(0)`
- Flush running earlier than group_interval
- Dedup stage "swallowing" this and returning the desired time
- Dispatcher using the return value to set `ag.next` accordingly

I find this preferable to reading the nflog in the Dispatcher directly.
I had a think about this and I think we need to restore the timer. If we don't then, in the worst case, this patch will delay notifications for up to `2 * group_interval`. It's not unusual for users to set large group intervals (several hours). For example, consider the following configuration:

```yaml
receiver: test
route:
  receiver: test
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 4h
```
- An alert is received at time `t`. The Group wait timer is started.
- The Group wait timer expires at `t + 15s`, a flush occurs, and Alertmanager sends a notification. The next Group interval will occur at `t + 5m15s`.
- A second alert is received at time `t + 30s`.
- Alertmanager is reloaded at time `t + 5m10s`. A flush occurs; however, it is ignored because it is 5 seconds before the expected Group interval. The next Group interval will occur at `t + 10m15s`.
- At time `t + 10m15s`, the Group interval timer expires. Another flush occurs, and this time a notification is sent.

In this example a notification should have been sent 5m after the first notification, but was instead sent 10 minutes after it, at `t + 10m15s`. Instead of sending notifications too soon, Alertmanager is now sending notifications too late?
Yes, the above can happen. We could run the flush more frequently, but the downside is more CPU, and the notification would still be off.

What makes this a bit hard is that the dispatcher doesn't have access to the nflogpb.Entry, and that the entry is per receiver.

Given config reloads and restarts of Alertmanager instances, I can see two approaches:

- We extend `Stage.Exec` to return the time the pipeline should run the next time. The FanoutStage/MultiStage can then return the closest non-zero timestamp.
- Extend the Stage interface with a function that returns the next run time. We can either synchronize once (like we do today) or continuously.
I opted for the 2nd option. One of the acceptance tests required updates now that we "restore" the timer. I also changed the repeat interval to be based on dispatch time rather than on when the notification succeeded.

I plan on adding a few more tests but would appreciate some early comments.
Can you add a test for the following configuration:

```yaml
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
```

where the reload happens 10 mins after the notification is sent? I think in this case the code will do an immediate flush due to line 455 of dispatch.go, when I assume it should restore the timer instead?
The issue is that the nflog is not updated each time the aggregation group is flushed, just when a notification is sent. This is where I'm not sure storing the flush timestamp in the nflog makes sense, because the nflog is specific to notification deduplication.

What I'm thinking instead is: can we get the existing timers from the old aggregation groups at reload time and pass them to the new groups? Groups which have had their time intervals changed will need to start again from Group wait.
Force-pushed from e1464ba to e4c97e3
Add an acceptance test that triggers a config reload and verifies that no early notification is occurring.

One of the challenges is which time to use to check for a previous notification. The nflog captures the time all notifications were sent. That conflicts with the ag.next timer that gets reset before the ag is being flushed. Delays and retries can make these two points in time differ enough for the integration tests to fail.

I considered the following ways to fix it:

1.) Modify the nflog.proto to capture the flush time in addition to the successful notification time.
2.) In addition to hasFlushed, capture the last flush time and pass it to the context like we do for Now.
3.) Set hasFlushed and reset the timer after the flush has been done.

I started with option 3 as it seemed to have the fewest downsides with things like drift. Based on comments this is now option 1.

needsUpdate is based on: #3074 (comment)

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>
Force-pushed from e4c97e3 to 91dfd29
When creating an AggregationGroup, consult the Stage about the next expected time to `Exec` and use it to set the initial timer. Update the acceptance test to match the new behavior and add a test that receives the updated group at the group_interval timeout.

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>
Use the dispatch time instead of the time of the last successful notification for the repeat interval. This avoids drift and also keeps the time consistent across notifications.

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>
Force-pushed from 91dfd29 to c54e386
Add an acceptance test that triggers a config reload and verifies that no early notification is occurring.
One of the challenges is which time to use to check for a previous notification. The nflog captures the time all notifications were sent. That conflicts with the ag.next timer that gets reset before the ag is being flushed. Delays and retries can make these two points in time differ enough for the integration tests to fail.
I considered the following ways to fix it:
1.) Modify the nflog.proto to capture the flush time in addition
to the successful notification time.
2.) In addition to hasFlushed capture the last flush time and pass
it to the context like we do for Now.
3.) Set hasFlushed and reset the timer after the flush has been
done.
I started with #3 as it seemed to have the fewest downsides with things like drift.
needsUpdate is based on:
#3074 (comment)