On 24.11.21 17:25, Dimitri Fontaine wrote:
> Is there a case to be made about doing the same thing for physical
> replication slots too?

It has been considered. At the moment, I'm not doing it, because it would add more code and complexity and it's not that important. But it could be added in the future.

> Given the admitted state of the patch, I didn't focus on tests. I could
> successfully apply the patch on top of the current master branch, and
> cleanly compile and `make check`.
>
> Then I also updated pg_auto_failover to support Postgres 15devel [2] so
> that I could then `make NODES=3 cluster` there and play with the new
> replication command:
>
>    $ psql -d "port=5501 replication=1" -c "LIST_SLOTS;"
>    psql:/Users/dim/.psqlrc:24: ERROR:  XX000: cannot execute SQL commands in WAL sender for physical replication
>    LOCATION:  exec_replication_command, walsender.c:1830
>    ...
>
> I'm not too sure about this idea of running SQL in a replication
> protocol connection that you're mentioning, but I suppose that's just me
> needing to brush up on the topic.

FWIW, the way the replication command parser works, if a command fails to parse as a replication command, it is retried as a plain SQL command. But that fallback only exists for logical replication connections; on a physical replication connection, anything that does not parse as a replication command produces the error you saw. That has nothing to do with this feature, though. The above command works for me, so maybe something else went wrong in your setup.
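To illustrate the difference, here is a sketch of what the two connection types do with plain SQL (ports are just the ones from your pg_auto_failover test cluster; exact output may differ):

```
# physical replication connection: plain SQL is rejected
$ psql "port=5501 replication=1" -c "SELECT 1;"
ERROR:  cannot execute SQL commands in WAL sender for physical replication

# logical replication connection (replication=database): commands that
# don't parse as replication commands fall back to plain SQL, so this works
$ psql "port=5501 replication=database dbname=postgres" -c "SELECT 1;"
```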

> Maybe the first question about configuration would be about selecting
> which slots a standby should maintain from the primary. Is it all of the
> slots that exist on both nodes, or a sublist of that?
>
> Is it possible to have a slot with the same name on a primary and a
> standby node, in a way that the standby's slot would be a completely
> separate entity from the primary's slot? If yes (I just don't know at
> the moment), well then, should we continue to allow that?

This has been added in v2.
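For illustration, the selection could be expressed as a standby-side setting along these lines; the parameter name and syntax here are only a sketch, not necessarily what v2 actually uses:

```
# hypothetical postgresql.conf fragment on the standby
# (parameter name is illustrative only)
synchronize_slot_names = 'slot_a, slot_b'   # sync only these slots from the primary
#synchronize_slot_names = '*'               # or: sync all slots
```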

> Also, do we want to even consider having the slot management on a
> primary node depend on the ability to sync the advancing on one or more
> standby nodes? I'm not sure I see that one as a good idea, but maybe we
> want to kill it publicly very early then ;-)

I don't know what you mean by this.
