As part of the commit operation, Postgres inserts the notification into a
queue. That insert is guarded by a lock, and the lock is released only at
the very end of the commit operation, so concurrently committing sessions
that send notifications serialize on it. The effect gets much worse if you
also configure synchronous replication, because the commit finishes only
when the desired sync level has been achieved.

An alternative is to receive notifications through the WAL stream. Use
pg_recvlogical to connect to a replication slot configured with wal2json.
You can then send notifications with pg_logical_emit_message(), for
instance, which even allows you to send notifications from aborted
transactions.

Setup:
SELECT * FROM pg_create_logical_replication_slot('w2j', 'wal2json');
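Before wiring up pg_recvlogical you can sanity-check the slot from SQL by
peeking at it (this does not consume the changes; option names follow
wal2json's format-version 2):

```sql
-- Emit a test message (non-transactional, so it is written immediately).
SELECT pg_logical_emit_message(false, 'NOTIFY_test', 'ping');

-- Peek at the slot without advancing it.
SELECT data
  FROM pg_logical_slot_peek_changes('w2j', NULL, NULL,
                                    'format-version', '2');
```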

Here is a sender:
CREATE SCHEMA notify;
CREATE OR REPLACE FUNCTION notify.send(
    transactional BOOLEAN,
    channel TEXT,
    message TEXT
) RETURNS PG_LSN AS $$
    SELECT pg_logical_emit_message(
               transactional,
               'NOTIFY_'||channel,
               jsonb_build_object(
                   'txn_start', now()::TEXT,
                   'pid', pg_backend_pid(),
                   'message', message
               )::TEXT
           );
$$ LANGUAGE sql VOLATILE;
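Usage could look like the following. With transactional => false the
message is written to the WAL right away, so it shows up in the stream even
if the surrounding transaction rolls back:

```sql
-- Transactional: delivered only if the transaction commits.
SELECT notify.send(true, 'user', 'hello world');

-- Non-transactional: survives a rollback.
BEGIN;
SELECT notify.send(false, 'user', 'sent from an aborted txn');
ROLLBACK;
```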

And something like this as a receiver / distributor:
sudo -iu postgres /usr/lib/postgresql/$VERSION/bin/pg_recvlogical \
     -d 'port=5432 dbname=your_database' \
     -S w2j \
     -f - \
     -F 0 \
     --start \
     -o include-xids=1 \
     -o include-timestamp=1 \
     -o add-tables=notify.\* \
     -o format-version=2 |
jq 'select(.action=="M" and (.prefix | test("^NOTIFY_"))) |
    . as $x |
    .content |
    fromjson + {
        "channel": ($x.prefix | .[7:]),
        "xid": $x.xid,
        "txn_commit": $x.timestamp,
        "transactional": $x.transactional
    }'

Note that this is not a 100% replacement; the semantics are slightly
different. You also need a replication slot, which comes with its own
maintenance cost. But maybe it's worth a try.
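Regarding that maintenance cost: a slot with no active consumer retains WAL
indefinitely, so it is worth monitoring, for example with something like
this (using the slot name 'w2j' from above):

```sql
-- How much WAL the slot is currently holding back.
SELECT slot_name, active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
  FROM pg_replication_slots
 WHERE slot_name = 'w2j';
```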

Have fun,
Torsten

On Tue, Apr 30, 2024 at 12:05 PM Dimitris Zenios <dimitris.zen...@gmail.com>
wrote:

> Hi,
>
> I am measuring a very simple case of pg_notify in an after update trigger.
> The trigger is the following:
>
> CREATE
>     OR REPLACE FUNCTION audit_event() RETURNS TRIGGER AS
> $$
> BEGIN
>     PERFORM pg_notify('user', 'hello world');
>     RETURN NULL;
> END;
> $$
>     LANGUAGE plpgsql;
>
> and configured on the table with the following sql
>
> CREATE CONSTRAINT TRIGGER trigger_test AFTER UPDATE ON pgbench_accounts
> FOR EACH ROW EXECUTE PROCEDURE audit_event()
>
> I am running two benchmarks. One with the pg_notify on the trigger enabled
> and one with the notify commented out.
>
> The command is the following:
> pgbench -f /tmp/update_bench.sql  -c 10 -j 10 -t 100 benchdb
>
> And the content of the update_bench.sql are the following
>
> \set aid random(1, 100000 * :scale)
> \set delta random(-5000, 5000)
> BEGIN;
> UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
> END;
>
> Results are
>
> - With pg_notify disabled
>
> pgbench (16.1)
> starting vacuum...end.
> transaction type: /tmp/update_bench.sql
> scaling factor: 1
> query mode: simple
> number of clients: 10
> number of threads: 10
> maximum number of tries: 1
> number of transactions per client: 100
> number of transactions actually processed: 1000/1000
> number of failed transactions: 0 (0.000%)
> latency average = 11.744 ms
> initial connection time = 15.616 ms
> tps = 851.531991 (without initial connection time)
>
> - With pg_notify enabled
>
> pgbench (16.1)
> starting vacuum...end.
> transaction type: /tmp/update_bench.sql
> scaling factor: 1
> query mode: simple
> number of clients: 10
> number of threads: 10
> maximum number of tries: 1
> number of transactions per client: 100
> number of transactions actually processed: 1000/1000
> number of failed transactions: 0 (0.000%)
> latency average = 56.927 ms
> initial connection time = 11.182 ms
> tps = 175.664989 (without initial connection time)
>
> There is a huge drop in TPS from 851 to 175.
>
> I also noticed that if I run the test with a single connection -c 1 then
> the results are nearly identical which makes me assume that this is a
> contention that occurs between multiple connections.
>
> Thanks
>
>
