Hi.
I'm running PostgreSQL in Kubernetes.
Is it possible to emit csvlog to standard output/error?
I tried a naive & hacky workaround by setting log_directory='/dev/fd' and
log_filename='2', but it does not work.
Thanks!
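For reference, this is roughly what the attempted workaround above amounts to in postgresql.conf (a sketch of the non-working attempt, not a recommendation):

```
# postgresql.conf -- sketch of the attempted workaround
log_destination = 'csvlog'
logging_collector = on        # csvlog requires the logging collector
log_directory = '/dev/fd'     # hoping to reach file descriptor 2 ...
log_filename = '2'            # ... i.e. standard error
```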
…especially when time sync on both servers is not accurate. In
my case, the destination server clock was a few minutes in the future.
So pg_clog was broken due to this, which means a completely corrupted
database.
Thanks Stephen & Andres for your responses.
On Mon, Feb 25, 2019 at 8:06 PM Filip Rembiałkowski wrote:
On Tue, Feb 26, 2019 at 2:39 AM Andres Freund wrote:
>
> > 2. base backup is transferred directly to new server using
> > pg_start_backup + rsync + pg_stop_backup.
>
I excluded the contents of pg_xlog only. The exact command was:
# start script
psql -Xc "select pg_start_backup('mirror to $standby',
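Fleshed out, the procedure from the quoted step 2 looks roughly like this (the label, standby host, and data directory path are placeholders, not the exact script used; shown in dry-run form, printing each command, since it needs a live cluster):

```shell
#!/bin/sh
# Hypothetical sketch of the pg_start_backup + rsync + pg_stop_backup
# procedure described above. $standby and $PGDATA are placeholders.
standby=standby1
PGDATA=/var/lib/postgresql/9.0/main

base_backup_steps() {
    # 1. Begin the backup; 'true' requests an immediate checkpoint
    #    (9.0 signature: pg_start_backup(label, fast)).
    echo "psql -Xc \"select pg_start_backup('mirror to $standby', true)\""
    # 2. Copy the data directory, excluding pg_xlog as described above.
    echo "rsync -a --delete --exclude 'pg_xlog/*' $PGDATA/ $standby:$PGDATA/"
    # 3. End the backup so the final WAL segment gets archived.
    echo "psql -Xc \"select pg_stop_backup()\""
}

base_backup_steps
```

On a real system, each printed command would be executed directly instead of echoed.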
On Mon, Feb 25, 2019 at 11:45 PM Stephen Frost wrote:
>
> Greetings,
>
> * Filip Rembiałkowski (filip.rembialkow...@gmail.com) wrote:
> > There is a large (>5T) database on PostgreSQL 9.0.23.
>
> First off, I hope you understand that 9.0 has been *long* out of
> support.
No - because there's no "out-of-the-box" solution that creates two
replicas, both writable, as noted at
https://www.postgresql.org/docs/11/different-replication-solutions.html
Yes - because with current Postgres features (logical replication,
partitions, foreign tables, ...) you can build such solutions yourself.
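As a sketch of the second point, logical replication can tie two writable nodes together (all object and connection names below are made up for illustration; resolving write conflicts between the nodes is left entirely to you):

```sql
-- On node A (hypothetical names throughout):
CREATE PUBLICATION pub_a FOR TABLE accounts;

-- On node B, which stays writable while also receiving A's changes:
CREATE SUBSCRIPTION sub_from_a
    CONNECTION 'host=node-a dbname=app user=repl'
    PUBLICATION pub_a;
```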
Hi.
There is a large (>5T) database on PostgreSQL 9.0.23.
I would like to set up a new WAL-shipping standby.
https://www.postgresql.org/docs/9.0/warm-standby.html
Along the way I ran into unexpected issues. Here's the story, in short:
1. WAL archiving to a remote archive is set up & verified
2. base backup is transferred directly to the new server using
pg_start_backup + rsync + pg_stop_backup
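A sketch of what step 1 typically looks like on the primary in this era of PostgreSQL (the archive host name and path are placeholders, not the actual setup):

```
# postgresql.conf (9.0-era parameters; values are examples)
wal_level = archive
archive_mode = on
archive_command = 'rsync -a %p archive-host:/wal_archive/%f'
```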