Hi
I have some strange issues with a postgresql read replica that seems to stop
replicating under certain circumstances.
Whenever we have changes to our views we have a script that drops all views and
reloads them from scratch with the new definitions. The reloading of the views
happens in a tran
On Tue, Jun 6, 2017 at 1:52 PM, Bhattacharyya, Subhro
wrote:
> Our expectation is that slave will be able to sync with the new master with
> the help of whatever WALs are present in the new master due to replication
> slots.
> Can pg_rewind still work without WAL archiving in this scenario?
I s
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Replication slot and pg_rewind
We are using the replication slot and pg_rewind feature of postgresql 9.6
Our cluster consists of 1 master and 1 slave node.
The replication slot feature allows the master to keep as much WAL as is
required by the slave.
The pg_rewind command uses WALs to bring the slave in sync with the master
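For reference, a typical rewind of the old master looks roughly like this (a sketch only; hostnames and paths are illustrative, and pg_rewind requires that wal_log_hints=on or data checksums were enabled before the divergence, plus enough WAL back to the divergence point on the source):

```shell
# Old master must be cleanly shut down first
pg_ctl -D /var/lib/postgresql/9.6/data stop -m fast

# Rewind the old master's data directory against the new master
pg_rewind --target-pgdata=/var/lib/postgresql/9.6/data \
          --source-server='host=new-master port=5432 user=postgres dbname=postgres'
```

Without WAL archiving, the WAL needed for the rewind and the subsequent catch-up must still be present on the source, which is exactly where the replication slot helps.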
On 01/04/2017 08:44 AM, Tom DalPozzo wrote:
Hi,
Postgres version?
Because in 9.6:
https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-REPLICATION
Table 9-82. Replication SQL Functions
pg_create_physical_replication_slot(slot_name name [,
immediately_reserve boolean ])
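The cited function can be used like this (a sketch for 9.6, where the two-argument form exists; the slot name is illustrative):

```sql
-- Reserve WAL immediately so segments are kept until the
-- delayed standby connects:
SELECT pg_create_physical_replication_slot('standby1', true);

-- The standby's recovery.conf would then reference it:
--   primary_slot_name = 'standby1'

-- Drop the slot if the standby never connects, or pg_xlog
-- will keep growing:
SELECT pg_drop_replication_slot('standby1');
```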
Hi,
I've got my primary and I make a pg_basebackup -x in order to create a
standby.
I can connect my standby only later, in some hours, so I'd like the master
to keep new WALs but I don't like to use archiving nor keep-segments
option. I thought to do it through a physical replication slot (my stan
Thanks a lot. That's what I was looking for ;)
Yes, I was trying to avoid logical replication. I guess it's time for me to
delve into it...
Hi,
Is it possible to use the built-in replication to replicate between two
PostgreSQL in the same version but in different versions of the same OS (say
Pg 9.1 Ubuntu 12 to Pg 9.1 Ubuntu 14)?
I think I read in Hackers that, since PostgreSQL uses the OS libraries for
encoding, it could cause silent c
We would like to have a master (read/write) version of a database (or a
schema or two) on one server and a read-only version of the same
database. The only changes on the second one may be to duplicate changes
to views, materialized views and indexes that also happened on the first
one.
We work
> You need to look at a replication solution like Slony, which is a trigger
> based replication solution. If you are using PostgreSQL version 9.4 or
> higher, then, you can explore "pglogical" which is WAL based and uses
> logical decoding capability.
I'm using 9.4, and I'm looking at pglogical as
Dear all,
I'd like to ask for help or advice with choosing the best replication setup for
my task.
I need to listen to continuous inserts/deletes/updates over a set of tables,
and serve them over http, so I would like to off-load this procedure to a
separate slave machine. I thought that logical
See also https://github.com/2ndQuadrant/bdr/issues/233
Increase wal_sender_timeout to resolve the issue.
I've been investigating just this issue recently. See
https://www.postgresql.org/message-id/camsr+ye2dsfhvr7iev1gspzihitwx-pmkd9qalegctya+sd...@mail.gmail.com
.
It would be very useful to me to know more about the transaction that
caused this prob
Hi there,
We have some problems with BDR and would appreciate any hints and advice with
it. Here's the short story:
We are testing BDR with PostgreSQL 9.4 and it seems to work quite ok after
getting it up and running, but we ran into a quite disturbing weakness also. A
basic two node cluster b
On Mon, Oct 24, 2016 at 2:20 PM, Dasitha Karunajeewa
wrote:
> Hi Michael,
>
> Thanks a lot for the information. I am totally new to Postgres clustering. I
> have followed the below-attached article.
> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postg
On Fri, Oct 21, 2016 at 10:10 PM, Dasitha Karunajeewa
wrote:
> What I want to know is how to switch them back to the normal status. That
> means pgmaster needs to be the master server which accepts the writes and
> pgslave that accepts only reads along with the replication.
If your promoted standby
Dear Team,
I have installed PostgreSQL 9.6 on two servers. One is master and other is
for slave server. Current setup as follows.
- Master Server - pgmaster
- Slave Server - pgslave
To implement this I have followed this article
https://www.digitalocean.com/community/tutorials/how-to-set-
On Mon, Sep 26, 2016 at 7:49 PM, hariprasath nallasamy
wrote:
>We are using replication slot for capturing some change sets to
> update dependent tables.
>Will there be inconsistency if the master fails and the standby takes
> the role of master?
Replication slot creation is not
Hi all
We are using replication slot for capturing some change sets to
update dependent tables.
Will there be inconsistency if the master fails and the standby
takes the role of master?
cheers
-harry
On Mon, Sep 12, 2016 at 3:46 PM, Lee Hachadoorian
wrote:
> * Because database is updated infrequently, workforce can come
> together for LAN-based replication as needed
> * Entire database is on the order of a few GB
Just update one copy, then send pg_dump's to the others for stomping
over the ol
There are a wide variety of Postgres replication solutions, and I
would like advice on which one would be appropriate to my use case.
* Small (~half dozen) distributed workforce using a file sharing
service, but without access to direct network connection over the
internet
* Database is updated in
To: "Nick Babadzhanian"
Cc: "pgsql-general"
Sent: Wednesday, July 6, 2016 11:00:05 PM
Subject: Re: [GENERAL] Replication with non-read-only standby.
2016-06-30 15:15 GMT+02:00 Nick Babadzhanian :
> Setup:
> 2 PostgreSQL servers are geographically spread. The fi
Setup:
2 PostgreSQL servers are geographically spread. The first one is used for an
application that gathers data. It is connected to the second database that is
used to process the said data. Connection is not very stable nor is it fast, so
using Bidirectional replication is not an option. It i
Hi,
Thank you for answering.
Regards,
Bertrand
2016-06-06 10:22 GMT+02:00 Vik Fearing :
> On 06/06/16 09:54, Masahiko Sawada wrote:
> > On Sat, Jun 4, 2016 at 10:58 PM, Vik Fearing wrote:
> >> On 02/06/16 15:32, Bertrand Paquet wrote:
> >>> Hi,
> >>>
> >>> On an hot standby streaming server, is
On 02/06/16 15:32, Bertrand Paquet wrote:
> Hi,
>
> On an hot standby streaming server, is there any way to know, in SQL, to
> know the ip of current master ?
No.
> The solution I have is to read the recovery.conf file to find
> primary_conninfo,
That is currently the only solution. There are
On 6/2/2016 6:32 AM, Bertrand Paquet wrote:
On an hot standby streaming server, is there any way to know, in SQL,
to know the ip of current master ?
The solution I have is to read the recovery.conf file to find
primary_conninfo, but, it can be false.
"The IP" assumes there is only one... hos
On Thu, Jun 2, 2016 at 10:16 AM, Melvin Davidson
wrote:
> It's been a few years since I worked with slony, and you did not state
> which version of slony or PostgreSQL you are working with, nor did you
> indicate the O/S.
>
I think OP had pointed to using streaming
> That being said, you s
It's been a few years since I worked with slony, and you did not state
which version of slony or PostgreSQL you are working with, nor did you
indicate the O/S.
That being said, you should be able to formulate a query with a join
between sl_path & sl_node that gives you the information you need.
On
Hi,
On a hot standby streaming server, is there any way to know, in SQL,
the IP of the current master?
The solution I have is to read the recovery.conf file to find
primary_conninfo, but it can be wrong.
Regards,
Bertrand
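For later readers: from PostgreSQL 9.6 the WAL receiver is visible in SQL, which answers this directly (on 9.5 and earlier, primary_conninfo in recovery.conf remains the only source, as discussed above):

```sql
-- 9.6+: the standby's wal receiver exposes the connection string
-- it is actually using, including the host it connected to:
SELECT conninfo FROM pg_stat_wal_receiver;
```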
Thanks for the help. We need an upgrade on the DB for the solution. I
checked your suggestion and it works on versions from 9.1 and above
Regards
On Thu, Jan 28, 2016 at 12:04 PM, Andreas Kretschmer <
akretsch...@spamfence.net> wrote:
> Bala Venkat wrote:
>
> > Hi there -
> >
> >We
Hi there -
We have a setup with one master streaming to 3 slaves.
2 slaves are in our DR environment. One is the prod environment.
Wanted to make the DR as primary. I know we can make one of
the slaves in DR primary. If I want to keep the other slave as sla
On 10/4/2015 3:47 AM, Michael Paquier wrote:
(Seems like you forgot to push the Reply-all button)
On Sun, Oct 4, 2015 at 7:01 PM, Madovsky wrote:
On 10/3/2015 3:30 PM, Michael Paquier wrote:
and no reason is given to justify *why* this would be needed in your case
reason for a choice can
(Seems like you forgot to push the Reply-all button)
On Sun, Oct 4, 2015 at 7:01 PM, Madovsky wrote:
> On 10/3/2015 3:30 PM, Michael Paquier wrote:
>> and no reason is given to justify *why* this would be needed in your case
> reason for a choice can be often an issue for other :D
>
> I thought t
On Sun, Oct 4, 2015 at 6:38 AM, Madovsky wrote:
> On 10/3/2015 6:55 AM, Michael Paquier wrote:
>> On Sat, Oct 3, 2015 at 10:20 PM, Madovsky wrote:
>> Requesting the master would be necessary, still I don't really get why
>> you don't want to query the master for read queries... You could for
>> exa
Hi,
I would like to fix an issue I'm facing with version 9.4 streaming
replication.
Is it possible to set synchronous commit on the fly on the master
(or standby?)
so that it only sync-commits the hot standby node used by the client
who has a read-only SQL session open?
example:
node1 node2
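For what it's worth, synchronous_commit itself can already be changed per transaction; what cannot be chosen per client session is *which* standby is synchronous, since that is fixed cluster-wide by synchronous_standby_names on the master. A sketch of the per-transaction part:

```sql
-- synchronous_commit is a per-session/per-transaction GUC:
BEGIN;
SET LOCAL synchronous_commit = on;  -- wait for flush on the sync standby
-- ... writes that must be durable on the standby ...
COMMIT;
```

Note that even `on` only guarantees the standby has flushed the WAL, not that a read-only session there already sees the commit; the closest match for read-your-writes is remote_apply, which arrived in 9.6, after this thread.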
Howdy,
I had an instance where a replica fell out of sync with the master.
Now it's in a state where it's unable to catch up because the master has
already removed the WAL segment.
(logs)
Mar 2 23:10:13 db13 postgres[11099]: [3-1] user=,db=,host= LOG: streaming
replication successfully co
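On 9.4 and later, a physical replication slot is the standard way to prevent exactly this (the master retains WAL until the standby has consumed it; the slot name below is illustrative, and a slot for a dead standby must be dropped or it will pin WAL forever):

```sql
-- On the master:
SELECT pg_create_physical_replication_slot('replica1');

-- The standby's recovery.conf then references it:
--   primary_slot_name = 'replica1'

-- Monitor retained WAL so an unreachable standby can't fill the disk:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
```

On versions before 9.4, wal_keep_segments or WAL archiving are the only options.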
On Mon, Jan 5, 2015 at 6:51 PM, Edson Carlos Ericksson Richter
wrote:
> Would this kind of count be recorded somewhere else?
> How does the server know that the wal_segments have been exhausted?
You should evaluate the amount of wal_keep_segments necessary using
the replication lag in terms of
On Sun, Jan 4, 2015 at 1:48 AM, Edson Carlos Ericksson Richter
wrote:
> How to query current segments allocation relative to "Wal keep segments" in
> each master server?
What is your server version? You can have a look at
pg_stat_replication on the master which contains information about the
WAL
I'm maintaining async replication (streaming) between four database
servers arranged 2 x 2.
How to query current segments allocation relative to "Wal keep
segments" in each master server?
I want to add this query to Postbix in order to monitor if the "wal keep
segments" parameter is too shor
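A monitoring query along these lines works on the master from 9.2 onward (pre-10 function and column names; lag is in bytes per connected standby):

```sql
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             sent_location)   AS send_lag_bytes,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```

Comparing replay_lag_bytes against wal_keep_segments * 16MB shows how close a standby is to falling off the end of the retained WAL.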
Hi all;
we have a client running PostgreSQL on windows, and they want to run
streaming replication with some sort of failover.
We have streaming replication in place, we thought we could use
pgbouncer and in the case of the master being down our heartbeat script
would reload the pgbouncer co
On 3/11/2014 5:50 AM, Aggarwal, Ajay wrote:
That's exactly what I was thinking after all other experiments. Couple
of questions:
1) why did you say that 300 seconds is the upper limit? Is this
enforced by Postgres? What if I want to set it to 10 minutes?
2) whats the downside of bigger replicati
From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org]
on behalf of John R Pierce [pie...@hogranch.com]
Sent: Monday, March 10, 2014 9:58 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup
On 3/9/2014 6:52 PM
On 3/9/2014 6:52 PM, Aggarwal, Ajay wrote:
Our replication timeout is default 60 seconds. If we increase the
replication time to say 180 seconds, we see better results but backups
still fail occasionally.
so increase it to 300 seconds, or whatever. That's an upper limit, it
needs to be big e
Subject: RE: [GENERAL] replication timeout in pg_basebackup
I have already tried experimenting with linux dirty_ratio etc. You can only
fine tune up to a limit. The backup process still fills up the buffer cache
very quickly. Yes, my database is about 5-6 GB in size and will grow bigger
over time.
If
force it to use direct
I/O.
From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 8:31 PM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] replication timeout in pg_basebackup
On Tue, Mar 11, 2014 at 7:07 AM
On Tue, Mar 11, 2014 at 7:07 AM, Aggarwal, Ajay wrote:
> Thanks Hari Babu.
>
> I think what is happening is that my dirty cache builds up quickly for the
> volume where I am backing up. This would trigger flush of these dirty pages
> to the disk. While this flush is going on pg_basebackup tries to
207.248 ops/sec
Non-Sync'ed 8kB writes:
write 202216.900 ops/sec
From: Haribabu Kommi [kommi.harib...@gmail.com]
Sent: Monday, March 10, 2014 1:42 AM
To: Aggarwal, Ajay
Cc: pgsql-general@postgresql.org
Subject: Re:
Our environment: Postgres version 9.2.2 running on CentOS 6.4
Our backups using pg_basebackup are frequently failing with the following error:
"pg_basebackup: could not send feedback packet: server closed the connection
unexpectedly
This probably means the server terminated abnormally
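The timeout being discussed is set on the master; the value below is illustrative. Note the setting was renamed between releases, and 9.2 (the poster's version) uses the old name:

```
# postgresql.conf on the master (needs a reload):
# 9.2:
replication_timeout = 300s
# 9.3 and later, the same setting is named:
# wal_sender_timeout = 300s
```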
On Mon, Dec 30, 2013 at 10:05 PM, Joe Van Dyk wrote:
>> I meant all the replication settings, see [1]. And pg_stat_statements
>> when there is a problem, preferable the error, because when everything
>> is okay it is not very useful actually.
>
> I don't understand, how is pg_stat_statements helpf
On Wed, Dec 18, 2013 at 1:51 PM, Adrian Klaver wrote:
> On 12/18/2013 12:15 PM, Joe Van Dyk wrote:
>>
>> A possibly related question:
>>
>> I've set wal_keep_segments to 10,000 and also have archive_command
>> running wal-e. I'm seeing my wal files disappear from pg_xlog after 30
>> minutes. Is th
On Mon, Dec 9, 2013 at 3:13 PM, Dmitry Koterov wrote:
> Is there a way to compress the traffic between master and slave during the
> replication?.. The streaming gzip would be quite efficient for that.
Take a look at the ssh_tunnel.sh [1] tool. This is a wrapper around
SSH tunnel with compression
Joe Van Dyk wrote:
> If I run "COPY (select * from complicate_view) to stdout" on the standby,
> I've noticed that sometimes
> halts replication updates to the slave.
>
> For example, that's happening right now and "now() -
> pg_last_xact_replay_timestamp()" is 22 minutes.
> There's many transac
On 12/18/2013 12:15 PM, Joe Van Dyk wrote:
A possibly related question:
I've set wal_keep_segments to 10,000 and also have archive_command
running wal-e. I'm seeing my wal files disappear from pg_xlog after 30
minutes. Is that expected? Is there a way around that?
Well a WAL segment is 16MB in
A possibly related question:
I've set wal_keep_segments to 10,000 and also have archive_command running
wal-e. I'm seeing my wal files disappear from pg_xlog after 30 minutes. Is
that expected? Is there a way around that?
(I want to use streaming replication and wal-e for PITR restores)
On Wed,
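A quick back-of-envelope check makes the scale of that setting obvious (assuming the default 16MB segment size):

```sql
-- 10,000 kept segments pin roughly this much disk in pg_xlog:
SELECT 10000 * 16 / 1024.0 AS approx_gb;   -- about 156 GB
```

If pg_xlog is shrinking anyway, something other than wal_keep_segments (e.g. a restart, or a checkpoint-time recycle on a different timeline) is worth ruling out.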
I'm running Postgresql 9.3. I have a streaming replication server. Someone
was running a long COPY query (8 hours) on the standby which halted
replication. The replication stopped at 3:30 am. I canceled the
long-running query at 9:30 am and replication data started catching up.
The data up until 1
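For context, the behavior described (replay pausing for the duration of a standby query) is governed by the standby's recovery-conflict settings; the values below are illustrative:

```
# postgresql.conf on the standby:
max_standby_streaming_delay = 30s   # -1 means wait forever for the
                                    # query, which matches the 6-hour
                                    # stall seen here
hot_standby_feedback = on           # avoids vacuum-conflict
                                    # cancellations, at the cost of
                                    # some bloat on the master
```

With a finite delay the long COPY would have been cancelled instead of holding back replay.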
Hello,
Yes, gzip compression can be used for compressing WAL traffic during
streaming replication. The following tools can be used in this regard.
SSL compression: SSL support is built into PostgreSQL. You need to ensure you
have OpenSSL library support in your PostgreSQL installation.
Also, you can compr
Hello.
Is there a way to compress the traffic between master and slave during the
replication?.. The streaming gzip would be quite efficient for that.
(WAL archiving is not too good for this purpose because of high lag. I just
need to minimize the cross-datacenter traffic keeping the replication
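One common approach (a sketch only; hostnames and ports are assumptions) is to run the streaming connection through an SSH tunnel with compression:

```shell
# Compressed tunnel from the standby host to the master:
ssh -N -C -L 5433:localhost:5432 postgres@master.example.com &

# The standby's primary_conninfo then points at the local tunnel end:
#   primary_conninfo = 'host=localhost port=5433 user=replicator'
```

Alternatively, with OpenSSL builds of that era, connecting with `sslmode=require sslcompression=1` in primary_conninfo enabled SSL-level compression without a separate tunnel.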