On 17/07/16 20:50, Robert Haas wrote:
It's the same with cluster-wide management, dump and restore of replication
state to re-create a replication setup elsewhere, etc. We have to build the
groundwork first. Trying to pour the concrete for the top storey when the
bottom storey isn't even there yet isn't going to work out. You've argued
effectively the same thing elsewhere, saying that the pglogical submission
tried to do too much and should be further cut down.
Absolutely. I didn't mean to imply that the scope of that submission
should be expanded. What I'm a bit concerned about is that maybe we
haven't given enough thought to how some of this stuff is going to
work. Petr replied earlier with an assertion that all of the things
that I mentioned could be done using pglogical, but I'm not convinced.
I don't see how you can use pglogical to build a self-healing
replicated cluster, which is ultimately what people want. Maybe
that's just because I'm not personally deeply enmeshed in that
project, but I do try to read all of the relevant threads on
pgsql-hackers and keep tabs on what is happening.
That really depends on what you call a self-healing replicated cluster.
Suppose server A is publishing to server B. Well, clearly, A needs to
have a slot for server B, but does that slot correspond precisely to
the publication, or is that represented in some other way? How is the
subscription represented on server B? What happens if either A or B
undergoes a dump-and-restore cycle? It's just fuzzy to me how this
stuff is supposed to work in detail.
Yeah, that's because it is. The dump/restore cycle would work provided you
stopped the replication before doing it. That might not be perfect, but it's
still more than physical replication can do. Solving more complex scenarios
is something for the future; logical PITR might be the answer there, but I'm
not sure yet.
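For what it's worth, a rough sketch of that stopped-replication dump/restore,
assuming the in-core DDL ends up looking roughly like what pglogical exposes
today (the statement and object names below are assumptions, the syntax is
not settled):

    -- On the subscriber, stop apply before dumping (hypothetical syntax):
    ALTER SUBSCRIPTION mysub DISABLE;

    -- pg_dump/pg_restore the subscriber database as usual; the subscription
    -- itself and its replication origin do not follow the dump, so they have
    -- to be re-created on the restored copy.

    -- On the restored copy, point a fresh subscription at the publisher,
    -- which creates a new slot there and re-syncs from a new snapshot:
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=mydb'
        PUBLICATION mypub;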
About slots: a slot is just a primitive that gives us a snapshot mapped to an
LSN and keeps the historical catalog and WAL files around. There is no reason
for a single replication path to be forever tied to a single slot, though. In
fact, in pglogical (and my plan for core is the same) we already create
limited-lifespan slots to get new snapshots when adding a new table or
re-syncing an existing one. This is one of the reasons why I don't see much
use in being able to export a snapshot from a slot once it has already
started replicating, by the way. Using multiple slots also has the advantage
of parallelism: replication of other tables does not lag because we are
syncing another one.
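To make that concrete, here is roughly what one of those short-lived sync
slots looks like on the wire, using the existing walsender protocol plus
plain SQL; the slot name, output plugin, table, snapshot name and LSN are all
illustrative:

    -- On a replication connection to the upstream: create a throwaway slot.
    -- The command returns the consistent point and an exported snapshot name.
    CREATE_REPLICATION_SLOT sync_tmp_slot LOGICAL pglogical_output;

    -- On a normal SQL connection, copy the table as of exactly that snapshot
    -- (the snapshot stays valid until the walsender runs its next command):
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-00000002-1';
    COPY public.mytable TO STDOUT;
    COMMIT;

    -- Back on the replication connection, stream the changes made after the
    -- snapshot, starting at the consistent point, and drop the slot once the
    -- table has caught up with the main apply stream:
    START_REPLICATION SLOT sync_tmp_slot LOGICAL 0/16B3748;
    DROP_REPLICATION_SLOT sync_tmp_slot;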
It does make me wonder if we should look at extension points for the
walsender protocol though, now that we're likely to have a future need for
newer versions to connect to older versions - it'd be great if we could do
something like pg_upgrade_support to allow an enhanced logical migration
from 10.0 to 11.0 by installing some extension in 10.0 first.
Maybe, but let's get something that can work from >=10.0 to >=10.0 first.
Agreed.
--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services