Fixes early notifications after config reload (#2492) #3835

Open · wants to merge 3 commits into main
Conversation

zecke (Contributor) commented May 5, 2024

Add an acceptance test that triggers a config reload and verifies that no early notification is occurring.

One of the challenges is which time to use to check for a previous notification. The nflog captures roughly the time all notifications were sent. That conflicts with the ag.next timer that gets reset before the ag is being flushed. Delays and retries can make these two points in time differ enough for the integration tests to fail.

I considered the following ways to fix it:

1.) Modify the nflog.proto to capture the flush time in addition
to the successful notification time.
2.) In addition to hasFlushed capture the last flush time and pass
it to the context like we do for Now.
3.) Set hasFlushed and reset the timer after the flush has been
done.

I started with #3 as it seemed to have the fewest downsides with things like drift.

needsUpdate is based on:
#3074 (comment)

zecke (Contributor, Author) commented May 5, 2024

I was trying to find out why we set hasFlushed before we actually started flushing (let alone finished it).

The flag was introduced in #1301 to mitigate duplicate notifications by preventing ag.run() from executing too frequently.

With this change the problem is solved inside the DedupStage. An ag.next.Reset(0) will flush the group early, but since that happens before the groupInterval has elapsed, it should not lead to extra notifications. The only visible impact should be jitter. If this matters, we can replace the timer reset with a write into a channel.

ctx = notify.WithGroupKey(ctx, ag.GroupKey())
ctx = notify.WithGroupLabels(ctx, ag.labels)
ctx = notify.WithReceiverName(ctx, ag.opts.Receiver)
ctx = notify.WithRepeatInterval(ctx, ag.opts.RepeatInterval)
ctx = notify.WithMuteTimeIntervals(ctx, ag.opts.MuteTimeIntervals)
ctx = notify.WithActiveTimeIntervals(ctx, ag.opts.ActiveTimeIntervals)

ag.flush(func(alerts ...*types.Alert) bool {
Reviewer (Contributor) commented:

I haven't had time to look at the other changes, but I think there is a subtle bug here. Prior to this change, the timer was reset before the flush, so Group Interval is unaffected by the duration of the notify function. However, now the timer is reset after the flush. This means Group Interval is reset relative to the duration of the notify function instead of relative to the time of the previous flush. Won't this cause Group interval to drift over time?

zecke (Author) replied May 7, 2024:

I argue s/bug/trade-off/. It picks "success" (or timeout) as the point of the last group notification. It might even match user experience better: it counts from the time I got paged rather than the time it started trying to page me. On top of that, we have no test case that tries to capture the drifting behavior.

In terms of drift, we also have the small race between ag.next.Reset(0) and configuration reload (which ends up creating a whole new Dispatcher). The drift is larger in case of errors (where we might end up skipping a notification anyway).

The alternative to the above is that we start tracking both the success and start time in the nflogpb.

Reviewer replied:

The problem is that it breaks Alertmanager HA which relies on the Group wait and Group interval timers being in lock-step on all replicas for the same aggregation group. If those timers drift, then de-duplication in dedupStage will stop working.

zecke replied:

Maybe I am missing something, but to my understanding the only communication between the replicas in the HA set-up is the nflog? Specifically, the dedup stage today relies on return entry.Timestamp.Before(n.now().Add(-repeat)) if no alerts changed and the config was reloaded.

Today, config reload is probably one of the likeliest places where the group interval can drift. We re-create the dispatcher, and the pipeline can run earlier than the group interval. And that is essentially the bug we see during config reload: the pipeline runs earlier, sees unrelated changes, and makes a new notification.

What do you think about recording the start of the notification and then using it to set the timer accordingly?

Reviewer replied:

Maybe I am missing something, but to my understanding the only communication between the replicas in the HA set-up is the nflog? Specifically, the dedup stage today relies on return entry.Timestamp.Before(n.now().Add(-repeat)) if no alerts changed and the config was reloaded.

That's correct!

However, there is also an implicit assumption in Alertmanager that the Group interval for each aggregation group is synchronized across all Alertmanagers in an HA set-up for dedup to work. It looks like this:

[diagram: notification pipeline timing across AM1, AM2, and AM3]

You can see here that AM2 and AM3 wait peer_timeout and 2*peer_timeout seconds in the wait stage. This is how Alertmanager failover works in case AM1 or AM2 cannot send a notification. However, this behavior relies on Group wait and Group interval timers firing at the same time. Alertmanagers do not communicate with each other to synchronize these timers, but instead use the time an alert was received from Prometheus as Prometheus broadcasts each alert to all Alertmanagers.

However, if the timers become de-synchronized then the wait stage stops working and each Alertmanager could enter the dedup and retry stages at the same time, causing duplicate notifications.

There are a number of issues with this, but the important issue here is that config reloads can cause the timers to be de-synchronized if the config is reloaded at different times. I think this is where the fix needs to be.

zecke replied May 10, 2024:

Fair point on synchronizing config reload time across cluster members. I think this is an additional problem as we can see that a config reload triggers an ag.flush before the expiry of the group interval. The acceptance test that I added shows the issue with a single instance.

I have revised the patch to carry the time of the ag.flush and use it to "mute" early notifications. If we want to we can build on it and use it to synchronize the timer across the instances and config reload.

Something like this:

  1. Config reload
  2. New Dispatcher, new aggregation groups
  3. ag.next.Reset(0)
  4. Flush running earlier than group_interval
  5. Dedup stage "swallowing" this and returning the desired time.
  6. Dispatcher using the return value to set ag.next accordingly.

I find this preferable to reading the nflog in the Dispatcher directly.

Reviewer replied:

I had a think about this and I think we need to restore the timer. If we don't then, in the worst case, this patch will delay notifications for up to 2 * group_interval. It's not unusual for users to set large group intervals (several hours). For example, consider the following configuration:

receiver: test
route:
  receiver: test
  group_wait: 15s
  group_interval: 5m
  repeat_interval: 4h

  1. An alert is received at time t. The Group wait timer is started.
  2. The Group wait timer expires at t + 15s, a flush occurs and Alertmanager sends a notification. The next Group interval will occur at t + 5m15s.
  3. A second alert is received at time t + 30s.
  4. Alertmanager is reloaded at time t + 5m10s. A flush occurs, however it is ignored because it is 5 seconds before the expected Group interval. The next Group interval will occur at t + 10m15s.
  5. At time t + 10m15s, the Group interval timer expires. Another flush occurs, and this time a notification is sent.

In this example a notification should have been sent 5m after the first notification, but instead was sent 10 minutes after it, at 10m15s. Instead of sending notifications too soon, Alertmanager is now sending notifications too late?

zecke replied:

Yes, the above can happen. We could run flush more frequently, but the downside is more CPU, and the notification would still be off.

What makes this a bit hard is that the dispatcher doesn't have access to the nflogpb.Entry and that the entry is per receiver.

Given config reloads and restarts of Alertmanager instances, I can see two approaches:

  1. We extend Stage.Exec to return the time the pipeline should run the next time. The FanoutState/MultiStage can then return the closest non-zero timestamp.
  2. Extend the Stage interface to have a function that returns the next run time. We can either synchronize once (like we do today) or continuously.

zecke replied:

I opted for the 2nd option. One of the acceptance tests required updates now that we "restore" the timer. I also changed the repeat interval to be based on dispatch time rather than on when the notification succeeded.

I plan on adding a few more tests but would appreciate some early comments.

Reviewer replied:

Can you add a test for the following configuration:

route:
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 4h

Where the reload happens 10 mins after the notification is sent. I think in this case the code will do an immediate flush due to line 455 of dispatch.go, when I assume it should restore the timer instead?

The issue is that nflog is not updated each time the aggregation group is flushed, just when a notification is sent.

This is where I'm not sure storing the flush timestamp in the nflog makes sense because the nflog is specific to notification deduplication.

What I'm thinking instead is: can we get the existing timers from the old aggregation groups at reload time and pass them to the new groups? Groups which have had their time intervals changed will need to start again from Group wait.

Add an acceptance test that triggers a config reload and verifies
that no early notification is occurring.

One of the challenges is which time to use to check for a previous
notification. The nflog captures roughly the time all notifications
were sent. That conflicts with the ag.next timer that gets reset
before the ag is being flushed. Delays and retries can make these
two points in time differ enough for the integration tests to
fail.

I considered the following ways to fix it:

  1.) Modify the nflog.proto to capture the flush time in addition
      to the successful notification time.
  2.) In addition to hasFlushed capture the last flush time and pass
      it to the context like we do for Now.
  3.) Set hasFlushed and reset the timer after the flush has been
      done.

I started with prometheus#3 as it seemed to have the fewest downsides with
things like drift. Based on comments this is now prometheus#1.

needsUpdate is based on:
prometheus#3074 (comment)

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>
zecke added 2 commits May 19, 2024 08:57
When creating an AggregationGroup consult the Stage about the
next expected time to `Exec` and use it to set the initial timer.

Update the acceptance test to match the new behavior and add a
test to receive the updated group at group_interval timeout.

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>
Use the dispatch time instead of the last successful notification
for the repeat interval. This avoids drift and also keeps the
same time across notifications.

Signed-off-by: Holger Hans Peter Freyther <holger@freyther.de>