
Fixed postgresql>=10 secondary server lag always 0, SuperQ proposed a… #977

Open
wants to merge 1 commit into master

Conversation

Efiartes

Fixed postgresql>=10 secondary server lag always 0; SuperQ proposed a cleaner code solution :). pg_replication_test modified to test pgReplicationQueryBeforeVersion10 or pgReplicationQueryAfterVersion10 depending on the PostgreSQL version.


Signed-off-by: kr0m <kr0m@Garrus.alfaexploit.com>
Contributor

@SuperQ SuperQ left a comment


LGTM, Thanks!

@SuperQ
Contributor

SuperQ commented Nov 29, 2023

To add some more information here: it was discovered that if replication has failed, say blocked by an iptables firewall rule, pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn() will remain the same. This masks the lag value.

IMO, we should probably expose separate metrics for each of these things rather than compute it in the exporter.
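For illustration only, a standby-side query could return the raw pieces as separate columns, which could then be exposed as individual metrics (a sketch, not the exporter's current query; the aliases are hypothetical, and pg_wal_lsn_diff() requires PostgreSQL 10+):

-- Raw replication state as separate values instead of one pre-computed lag
-- (on a primary, the receive/replay functions return NULL).
SELECT
    pg_is_in_recovery()::int AS is_replica,
    pg_wal_lsn_diff(pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn()) AS receive_replay_gap_bytes,
    EXTRACT(EPOCH FROM pg_last_xact_replay_timestamp()) AS last_replayed_xact_timestamp;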

@SuperQ
Contributor

SuperQ commented Nov 29, 2023

This is a fix to the bug introduced by #895.

CC @IamLuksha.

@IamLuksha
Contributor

Maybe I'm wrong; there is no way for me to check and test right now. But in versions above 9.3 pg_last_xact_replay_timestamp does not work correctly, and in the proposed commit it is used for versions above 10.

The case of channel blocking must be considered separately: check the connection between the devices.
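For what it's worth, one standby-side way to check that connection would be the pg_stat_wal_receiver view (available since PostgreSQL 9.6); a minimal sketch:

-- Returns one row while the WAL receiver process is running; no row at all
-- already indicates that streaming replication is not connected.
SELECT
    status,
    EXTRACT(EPOCH FROM (now() - last_msg_receipt_time)) AS seconds_since_last_message
FROM pg_stat_wal_receiver;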

@SuperQ
Contributor

SuperQ commented Nov 29, 2023

@IamLuksha Are there recommended metrics/alerts for monitoring replication? Server side? Client side?

I think I am beginning to understand some of how this is being done. I'm going to propose an additional metric that will help with this.

Maybe we should add some example alerts to the mixin.

@IamLuksha
Contributor

IamLuksha commented Nov 29, 2023

@SuperQ

Are there recommended metrics/alerts for monitoring replication? Server side? Client side?

Maybe reply_time. I need some time to test, maybe next week; I don't have a working cluster right now.

An additional metric is a great idea, because it will clearly indicate the problem, since this commit will bring back issue #895.

@Efiartes
Author

Efiartes commented Nov 30, 2023

Are there recommended metrics/alerts for monitoring replication? Server-side? Client-side?
If you want to monitor lag from the Primary server, I think it would be better to monitor replay_lag, but if there is more than one synchronized Secondary server you have to know the IP/FQDN of each Secondary to assign an identifying label to the metric.

You can detect broken synchronization on a Secondary server if you know its IP/FQDN:
SELECT COUNT(*) FROM pg_stat_replication WHERE client_addr='SLAVE_IP' AND state = 'streaming';

And you can get the lag using the replay_lag column in this way:
SELECT COALESCE(EXTRACT(EPOCH FROM replay_lag)::bigint, 0) AS replay_lag FROM pg_stat_replication WHERE client_addr='SLAVE_IP';

Anyway, with the current Secondary metric monitoring we can monitor lag if we apply the proposed patch; there are two queries, one for PostgreSQL>=10 and another for PostgreSQL<10.

The old versions will execute:

pgReplicationQueryBeforeVersion10 =
    SELECT
        CASE
            WHEN NOT pg_is_in_recovery() THEN 0
            WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
            ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
        END AS lag,
        CASE
            WHEN pg_is_in_recovery() THEN 1
            ELSE 0
        END AS is_replica

And the new versions will execute:

pgReplicationQueryAfterVersion10 =
    SELECT
        CASE
            WHEN NOT pg_is_in_recovery() THEN 0
            ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
        END AS lag,
        CASE
            WHEN pg_is_in_recovery() THEN 1
            ELSE 0
        END AS is_replica

I think that by monitoring the pg_up (Primary/Secondary) and pg_replication_lag_seconds (Secondary) metrics we have all the troubleshooting cases covered.

@IamLuksha
Contributor

Anyway, with the current Secondary metric monitoring we can monitor lag if we apply the proposed patch; there are two queries, one for PostgreSQL>=10 and another for PostgreSQL<10.

This method does not work and was abandoned earlier. If there are no changes in the database for a long time, which is normal, we will get a replication error.

@Efiartes
Author

@IamLuksha This method does not work in postgresql>=10 or postgresql<10 or both?

@IamLuksha
Contributor

@IamLuksha This method does not work in postgresql>=10 or postgresql<10 or both?

both

WHEN NOT pg_is_in_recovery() THEN 0
WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
ELSE GREATEST(0, EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))

This code is for versions > 9.3, but you propose to use it for < 10.

And your 'postgresql>=10' query will report a replication error with an empty database, or when the database is rarely used. In the end, by solving your problem, you bring back the old issues.

In your case you need to check the connection between the servers.

@Efiartes
Author

@IamLuksha Correct me if I am wrong.

There can be two kinds of lag:

  • Local Secondary server lag: the information has been received by the Secondary server and written to disk, but it has not yet been replayed into the DB.
  • Network lag between the Primary and Secondary servers.

https://www.postgresql.org/docs/current/functions-admin.html

For Local lag we can check:
pg_last_wal_receive_lsn(): Returns the last write-ahead log location that has been received and synced to disk by streaming replication.
pg_last_wal_replay_lsn(): Returns the last write-ahead log location that has been replayed during recovery.

For Network lag:
pg_last_xact_replay_timestamp(): Returns the time stamp of the last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary.
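As an illustration (a sketch; these functions exist on PostgreSQL 10 and later, and the column aliases are just for readability), both views can be inspected side by side on a standby:

-- Compare the local receive/replay positions and the time-based view of lag.
SELECT
    pg_last_wal_receive_lsn() AS received_lsn,
    pg_last_wal_replay_lsn() AS replayed_lsn,
    pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() AS fully_replayed,
    now() - pg_last_xact_replay_timestamp() AS time_since_last_replayed_xact;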

The problem with using pg_last_xact_replay_timestamp is that it keeps returning the same value when there is no activity on the Primary server.

Have I understood the whole problem correctly?

@IamLuksha
Contributor

Have I understood the whole problem correctly?

Yes!

So we need to check pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()

@Efiartes
Author

When you say: In your case you need to check the connection between the servers

Check the connection using node_exporter or any other external monitoring system? Or by querying some PostgreSQL data?

@IamLuksha
Contributor

Check the connection using node_exporter or any other external monitoring system? Or by querying some PostgreSQL data?

Every method can be used

I don't remember if Postgres has a method to check the connection.
I won't be able to check until next week.

@Efiartes
Author

Efiartes commented Dec 4, 2023

Hello @IamLuksha, what do you think about monitoring lag and availability from the Primary server?

You can detect broken synchronization if you know the Secondary's IP/FQDN:
SELECT COUNT(*) FROM pg_stat_replication WHERE client_addr='SECONDARY_IP' AND state = 'streaming';

And you can get the lag using the replay_lag column in this way:
SELECT COALESCE(EXTRACT(EPOCH FROM replay_lag)::bigint, 0) AS replay_lag FROM pg_stat_replication WHERE client_addr='SECONDARY_IP';

The only problem I can see with this approach is that when the Secondary server is unreachable from the Primary, pg_stat_replication returns no row for it, so it simply disappears. The only solution I have thought of is saving the previously seen Secondary servers in a list file and, if one of them disappears, triggering an alarm.

Do you think this is a worthwhile approach? Any suggestions or solutions?
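For what it's worth, a primary-side query that enumerates every currently attached standby could feed such a check; if the set of rows (or metric label values) is tracked over time, a standby that disappears can itself be alerted on (a sketch using standard pg_stat_replication columns; replay_lag requires PostgreSQL 10+):

-- One row per attached standby; an expected standby missing from this result set
-- is itself a signal that replication is broken.
SELECT
    client_addr,
    application_name,
    state,
    COALESCE(EXTRACT(EPOCH FROM replay_lag), 0) AS replay_lag_seconds
FROM pg_stat_replication;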
