Heikki Linnakangas writes:
> On 03/07/10 18:32, Tom Lane wrote:
>> That would not do what you want at all in the case where you're
>> recovering from archive --- XLogReceiptTime would never advance
>> at all for the duration of the recovery.
> Do you mean when using something like pg_standby, whi
On 03/07/10 18:32, Tom Lane wrote:
Heikki Linnakangas writes:
It would seem logical to use the same logic for archive recovery as we
do for streaming replication, and only set XLogReceiptTime when you have
to wait for a WAL segment to arrive into the archive, ie. when
restore_command fails.
That would not do what you want at all in the case where you're
recovering from archive --- XLogReceiptTime would never advance
at all for the duration of the recovery.
Heikki Linnakangas writes:
> It would seem logical to use the same logic for archive recovery as we
> do for streaming replication, and only set XLogReceiptTime when you have
> to wait for a WAL segment to arrive into the archive, ie. when
> restore_command fails.
That would not do what you want at all in the case where you're
recovering from archive --- XLogReceiptTime would never advance
at all for the duration of the recovery.
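Tom's objection can be seen in a small simulation. The sketch below (illustrative Python, not the actual PostgreSQL C code; names like `xlog_receipt_time` and `update_policy` are invented for the example) contrasts updating the receipt timestamp only when restore_command fails against updating it on every restored segment: during a long catch-up from archive, restore_command keeps succeeding, so a failure-driven timestamp never advances.

```python
import time

def recover_from_archive(segments_available, update_policy):
    """Simulate archive recovery and track a receipt timestamp.

    update_policy:
      'on_wait'    - update only when restore_command fails, i.e. when we
                     had to wait for a segment (Heikki's suggestion)
      'on_restore' - update every time a segment is restored
    """
    xlog_receipt_time = None
    for seg in range(segments_available):
        # restore_command succeeds immediately for every already-archived
        # segment, as it would throughout a long catch-up from archive
        if update_policy == "on_restore":
            xlog_receipt_time = time.time()
        # under 'on_wait', restore never fails here, so the timestamp
        # never advances for the duration of the recovery
    return xlog_receipt_time
```

Under the 'on_wait' policy the timestamp stays unset for the whole catch-up, which is exactly the stall Tom describes.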
On 02/07/10 23:36, Tom Lane wrote:
Robert Haas writes:
I haven't been able to wrap my head around why the delay should be
LESS in the archive case than in the streaming case. Can you attempt
to hit me with the clue-by-four?
In the archive case, you're presumably trying to catch up, and so it
makes sense to kill queries faster so you can catch up.
On Fri, Jul 2, 2010 at 4:52 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Fri, Jul 2, 2010 at 4:36 PM, Tom Lane wrote:
>>> In the archive case, you're presumably trying to catch up, and so it
>>> makes sense to kill queries faster so you can catch up.
>
>> On the flip side, the timeout for the
Robert Haas writes:
> On Fri, Jul 2, 2010 at 4:36 PM, Tom Lane wrote:
>> In the archive case, you're presumably trying to catch up, and so it
>> makes sense to kill queries faster so you can catch up.
> On the flip side, the timeout for the WAL segment is for 16MB of WAL,
> whereas the timeout f
On Fri, Jul 2, 2010 at 4:36 PM, Tom Lane wrote:
> Robert Haas writes:
>> I haven't been able to wrap my head around why the delay should be
>> LESS in the archive case than in the streaming case. Can you attempt
>> to hit me with the clue-by-four?
>
> In the archive case, you're presumably trying to catch up, and so it
> makes sense to kill queries faster so you can catch up.
Robert Haas writes:
> I haven't been able to wrap my head around why the delay should be
> LESS in the archive case than in the streaming case. Can you attempt
> to hit me with the clue-by-four?
In the archive case, you're presumably trying to catch up, and so it
makes sense to kill queries faster so you can catch up.
On Fri, Jul 2, 2010 at 4:11 PM, Tom Lane wrote:
> [ Apologies for the very slow turnaround on this --- I got hit with
> another batch of non-postgres security issues this week. ]
>
> Attached is a draft patch for revising the max_standby_delay behavior into
> something I think is a bit saner. There is some unfinished business:
[ Apologies for the very slow turnaround on this --- I got hit with
another batch of non-postgres security issues this week. ]
Attached is a draft patch for revising the max_standby_delay behavior into
something I think is a bit saner. There is some unfinished business:
* I haven't touched the d
Simon Riggs wrote:
> On Mon, 2010-06-28 at 10:09 -0700, Josh Berkus wrote:
> > > It will get done. It is not the very first thing on my to-do list.
> >
> > ??? What is then?
> >
> > If it's not the first thing on your priority list, with 9.0 getting
> > later by the day, maybe we should leave
On Mon, 2010-06-28 at 10:09 -0700, Josh Berkus wrote:
> > It will get done. It is not the very first thing on my to-do list.
>
> ??? What is then?
>
> If it's not the first thing on your priority list, with 9.0 getting
> later by the day, maybe we should leave it to Robert and Simon, who *do*
On Mon, Jun 28, 2010 at 2:26 PM, Tom Lane wrote:
> Robert Haas writes:
>> ... It is even more unreasonable to commit to
>> providing a timely patch (twice) and then fail to do so. We are
>> trying to finalize a release here, and you've made it clear you think
>> this code needs revision before then.
Robert Haas writes:
> ... It is even more unreasonable to commit to
> providing a timely patch (twice) and then fail to do so. We are
> trying to finalize a release here, and you've made it clear you think
> this code needs revision before then. I respect your opinion, but not
> your right to ma
On Mon, Jun 28, 2010 at 10:19 AM, Tom Lane wrote:
> Simon Riggs writes:
>> On Wed, 2010-06-16 at 21:56 -0400, Tom Lane wrote:
>>> Sorry, I've been a bit distracted by other responsibilities (libtiff
>>> security issues for Red Hat, if you must know). I'll get on it shortly.
>
>> I don't think the PostgreSQL project should wait any longer on this.
It will get done. It is not the very first thing on my to-do list.
??? What is then?
If it's not the first thing on your priority list, with 9.0 getting
later by the day, maybe we should leave it to Robert and Simon, who *do*
seem to have it first on *their* list?
I swear, when Simon wa
Simon Riggs writes:
> On Wed, 2010-06-16 at 21:56 -0400, Tom Lane wrote:
>> Sorry, I've been a bit distracted by other responsibilities (libtiff
>> security issues for Red Hat, if you must know). I'll get on it shortly.
> I don't think the PostgreSQL project should wait any longer on this. If
>
On Mon, Jun 28, 2010 at 3:17 AM, Simon Riggs wrote:
> On Wed, 2010-06-16 at 21:56 -0400, Tom Lane wrote:
>> Robert Haas writes:
>> > On Wed, Jun 9, 2010 at 8:01 PM, Tom Lane wrote:
>> >> Yes, I'll get with it ...
>>
>> > Any update on this?
>>
>> Sorry, I've been a bit distracted by other responsibilities (libtiff
>> security issues for Red Hat, if you must know). I'll get on it shortly.
On Wed, 2010-06-16 at 21:56 -0400, Tom Lane wrote:
> Robert Haas writes:
> > On Wed, Jun 9, 2010 at 8:01 PM, Tom Lane wrote:
> >> Yes, I'll get with it ...
>
> > Any update on this?
>
> Sorry, I've been a bit distracted by other responsibilities (libtiff
> security issues for Red Hat, if you must know). I'll get on it shortly.
On Mon, Jun 21, 2010 at 12:20 AM, Ron Mayer wrote:
> Robert Haas wrote:
>> On Wed, Jun 16, 2010 at 9:56 PM, Tom Lane wrote:
>>> Sorry, I've been a bit distracted by other responsibilities (libtiff
>>> security issues for Red Hat, if you must know). I'll get on it shortly.
>>
>> What? You have other things to do besides hack on PostgreSQL? Shocking!
Robert Haas wrote:
> On Wed, Jun 16, 2010 at 9:56 PM, Tom Lane wrote:
>> Sorry, I've been a bit distracted by other responsibilities (libtiff
>> security issues for Red Hat, if you must know). I'll get on it shortly.
>
> What? You have other things to do besides hack on PostgreSQL? Shocking!
On Wed, Jun 16, 2010 at 9:56 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Wed, Jun 9, 2010 at 8:01 PM, Tom Lane wrote:
>>> Yes, I'll get with it ...
>
>> Any update on this?
>
> Sorry, I've been a bit distracted by other responsibilities (libtiff
> security issues for Red Hat, if you must know). I'll get on it shortly.
Robert Haas writes:
> On Wed, Jun 9, 2010 at 8:01 PM, Tom Lane wrote:
>> Yes, I'll get with it ...
> Any update on this?
Sorry, I've been a bit distracted by other responsibilities (libtiff
security issues for Red Hat, if you must know). I'll get on it shortly.
regards, tom lane
On Wed, Jun 9, 2010 at 8:01 PM, Tom Lane wrote:
> Simon Riggs writes:
>> On Thu, 2010-06-03 at 19:02 -0400, Tom Lane wrote:
>>> I decided there wasn't time to get anything useful done on it before the
>>> beta2 deadline (which is, more or less, right now). I will take another
>>> look over the next few days.
On Thu, 2010-06-03 at 19:02 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > On Thu, 2010-06-03 at 18:18 +0100, Simon Riggs wrote:
> >> Are you planning to work on these things now as you said?
>
> > Are you? Or do you want me to?
>
> I decided there wasn't time to get anything useful done on it before the
> beta2 deadline (which is, more or less, right now). I will take another
> look over the next few days.
Simon Riggs writes:
> On Thu, 2010-06-03 at 19:02 -0400, Tom Lane wrote:
>> I decided there wasn't time to get anything useful done on it before the
>> beta2 deadline (which is, more or less, right now). I will take another
>> look over the next few days.
> We all really need you to fix up max_standby_delay.
Simon Riggs writes:
> On Thu, 2010-06-03 at 18:18 +0100, Simon Riggs wrote:
>> Are you planning to work on these things now as you said?
> Are you? Or do you want me to?
I decided there wasn't time to get anything useful done on it before the
beta2 deadline (which is, more or less, right now). I will take another
look over the next few days.
On Thu, 2010-06-03 at 18:18 +0100, Simon Riggs wrote:
> Are you planning to work on these things now as you said?
Are you? Or do you want me to?
--
Simon Riggs www.2ndQuadrant.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
Greg Stark writes:
> On Thu, Jun 3, 2010 at 12:11 AM, Tom Lane wrote:
>> It is off-base. The receiver does not "request" data, the sender is
>> what determines how much WAL is sent when.
> Hm, so what happens if the slave blocks, doesn't the sender block when
> the kernel buffers fill up?
Well
On Thu, Jun 3, 2010 at 4:18 PM, Tom Lane wrote:
> Greg Stark writes:
>> On Thu, Jun 3, 2010 at 12:11 AM, Tom Lane wrote:
>>> It is off-base. The receiver does not "request" data, the sender is
>>> what determines how much WAL is sent when.
>
>> Hm, so what happens if the slave blocks, doesn't the sender block when
>> the kernel buffers fill up?
On Thu, 2010-06-03 at 13:32 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > On Thu, 2010-06-03 at 12:47 -0400, Tom Lane wrote:
> >> But in any case the current behavior is
> >> still quite broken as regards reading stale timestamps from WAL.
>
> > Agreed. That wasn't the objective of this patch or a priority.
Simon Riggs writes:
> On Thu, 2010-06-03 at 12:47 -0400, Tom Lane wrote:
>> But in any case the current behavior is
>> still quite broken as regards reading stale timestamps from WAL.
> Agreed. That wasn't the objective of this patch or a priority.
> If you're reading from an archive, you need t
On Thu, 2010-06-03 at 12:47 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > On Wed, 2010-06-02 at 13:14 -0400, Tom Lane wrote:
> >> This patch seems to me to be going in fundamentally the wrong direction.
> >> It's adding complexity and overhead (far more than is needed), and it's
> >> failing utterly to resolve the objections that I raised to start with.
Simon Riggs writes:
> On Wed, 2010-06-02 at 16:00 -0400, Tom Lane wrote:
>> the current situation that query grace periods go to zero
> Possibly a better way to handle this concern is to make the second
> parameter: min_standby_grace_period - the minimum time a query will be
> given in which to execute, even if max_standby_delay has already expired.
Simon Riggs writes:
> On Wed, 2010-06-02 at 13:14 -0400, Tom Lane wrote:
>> This patch seems to me to be going in fundamentally the wrong direction.
>> It's adding complexity and overhead (far more than is needed), and it's
>> failing utterly to resolve the objections that I raised to start with.
On Thu, Jun 3, 2010 at 4:34 PM, Tom Lane wrote:
> The data keeps coming in and getting dumped into the slave's pg_xlog.
> walsender/walreceiver are not at all tied to the slave's application
> of WAL. In principle we could have the code around max_standby_delay
> notice just how far behind it's g
Greg Stark writes:
>> Well, if the slave can't keep up, that's a separate problem. It will
>> not fail to keep up as a result of the transmission mechanism.
> No, I mean if the slave is paused due to a conflict. Does it stop
> reading data from the master or does it buffer it up on disk? If it
>
On Thu, Jun 3, 2010 at 12:11 AM, Tom Lane wrote:
> Greg Stark writes:
>> I was assuming the walreceiver only requests more wal in relatively
>> small chunks and only when replay has caught up and needs more data. I
>> haven't actually read this code so if that assumption is wrong then
>> I'm off-base.
On Thu, Jun 3, 2010 at 5:00 AM, Tom Lane wrote:
>> I stand by my suggestion from yesterday: Let's define max_standby_delay
>> as the difference between a piece of WAL becoming available in the
>> standby, and applying it.
>
> My proposal is essentially the same as yours plus allowing the DBA to
>
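Heikki's proposed definition is simple enough to sketch directly: the grace period is the difference between a piece of WAL becoming available on the standby and applying it, with both timestamps taken on the standby, so master/standby clock skew cannot distort the measurement. The function and argument names below are illustrative Python, not PostgreSQL's actual symbols.

```python
import time

def should_cancel_query(wal_available_at, max_standby_delay):
    """Return True when a conflicting query has used up its grace period.

    wal_available_at: standby-local timestamp at which the conflicting WAL
    became available (arrived via streaming or was restored from archive).
    Both timestamps are taken on the standby, so no cross-server clock
    comparison is involved.
    """
    apply_delay = time.time() - wal_available_at
    return apply_delay > max_standby_delay
```

A query conflicting with WAL that arrived 100 seconds ago would be cancelled under a 30-second max_standby_delay, while one conflicting with just-arrived WAL would not.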
On Thu, 2010-06-03 at 18:39 +0900, Fujii Masao wrote:
> What purpose would that serve?
Tom has already explained this and it makes sense for me.
--
Simon Riggs www.2ndQuadrant.com
On Thu, Jun 3, 2010 at 6:07 PM, Simon Riggs wrote:
> On Thu, 2010-06-03 at 17:56 +0900, Fujii Masao wrote:
>> On Thu, Jun 3, 2010 at 4:41 AM, Heikki Linnakangas wrote:
>> > I don't understand why you want to use a different delay when you're
>> > restoring from archive vs. when you're streaming (what about existing WAL
>> > files found in pg_xlog, BTW?). The source of WAL shouldn't make a
>> > difference.
On Thu, 2010-06-03 at 17:56 +0900, Fujii Masao wrote:
> On Thu, Jun 3, 2010 at 4:41 AM, Heikki Linnakangas wrote:
> > I don't understand why you want to use a different delay when you're
> > restoring from archive vs. when you're streaming (what about existing WAL
> > files found in pg_xlog, BTW?). The source of WAL shouldn't make a
> > difference.
On Thu, Jun 3, 2010 at 4:41 AM, Heikki Linnakangas wrote:
> I don't understand why you want to use a different delay when you're
> restoring from archive vs. when you're streaming (what about existing WAL
> files found in pg_xlog, BTW?). The source of WAL shouldn't make a
> difference.
Yes. The p
On Wed, 2010-06-02 at 16:00 -0400, Tom Lane wrote:
> the current situation that query grace periods go to zero
Possibly a better way to handle this concern is to make the second
parameter: min_standby_grace_period - the minimum time a query will be
given in which to execute, even if max_standby_delay has already expired.
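Simon's min_standby_grace_period idea can be sketched as a deadline computation. Only the parameter name comes from the thread; the combination rule shown (take the later of the two deadlines) and all function names are assumptions for illustration, not the actual PostgreSQL logic.

```python
def query_deadline(last_receipt_time, query_start_time,
                   max_standby_delay, min_standby_grace_period):
    """Return the time at which a conflicting standby query may be killed.

    Even when the max_standby_delay budget (measured from WAL receipt) is
    exhausted, the query is still guaranteed min_standby_grace_period
    seconds from its own start before cancellation.
    """
    delay_deadline = last_receipt_time + max_standby_delay
    grace_deadline = query_start_time + min_standby_grace_period
    # hypothetical combination rule: whichever deadline is later wins
    return max(delay_deadline, grace_deadline)
```

With receipt at t=0, max_standby_delay=100 and a 30-second grace period, a query started at t=90 survives until t=120, while one started at t=10 is bounded by the delay budget at t=100.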
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Greg Stark writes:
> > So I think this isn't necessarily such a blue moon event. As I
> > understand it, all it would take is a single long-running report and a
> > vacuum or HOT cleanup occurring on the master.
>
> I think this is mostly FUD too. How oft
Greg Stark writes:
> I was assuming the walreceiver only requests more wal in relatively
> small chunks and only when replay has caught up and needs more data. I
> haven't actually read this code so if that assumption is wrong then
> I'm off-base.
It is off-base. The receiver does not "request" data, the sender is
what determines how much WAL is sent when.
On Wed, Jun 2, 2010 at 8:14 PM, Tom Lane wrote:
> Indeed, but nothing we do can prevent that, if the slave is just plain
> slower than the master. You have to assume that the slave is capable of
> keeping up in the absence of query-caused delays, or you're hosed.
I was assuming the walreceiver o
Heikki Linnakangas writes:
> The problem with defining max_archive_delay that way is again that you
> can fall behind indefinitely.
See my response to Greg Stark --- I don't think this is really an
issue. It's certainly far less of an issue than the current situation
that query grace periods go to zero.
On 02/06/10 20:14, Tom Lane wrote:
For realistic values of max_standby_delay ...
Hang on right there. What do you consider a realistic value for
max_standby_delay? Because I'm not sure I have a grip on that myself. 5
seconds? 5 minutes? 5 hours? I can see use cases for all of those...
Wha
Greg Stark writes:
> On Wed, Jun 2, 2010 at 6:14 PM, Tom Lane wrote:
>> I believe that the motivation for treating archived timestamps as live
>> is, essentially, to force rapid catchup if a slave falls behind so far
>> that it's reading from archive instead of SR.
> Huh, I think this is the fir
On Wed, Jun 2, 2010 at 2:27 PM, Simon Riggs wrote:
> Syncing two servers in replication is common practice, as has been
> explained here; I'm still surprised people think otherwise. Measuring
> the time between two servers is the very purpose of the patch, so the
> synchronisation is not a design
On Wed, 2010-06-02 at 13:14 -0400, Tom Lane wrote:
> This patch seems to me to be going in fundamentally the wrong direction.
> It's adding complexity and overhead (far more than is needed), and it's
> failing utterly to resolve the objections that I raised to start with.
Having read your proposa
On Wed, Jun 2, 2010 at 6:14 PM, Tom Lane wrote:
> I believe that the motivation for treating archived timestamps as live
> is, essentially, to force rapid catchup if a slave falls behind so far
> that it's reading from archive instead of SR.
Huh, I think this is the first mention of this that I
On Wed, Jun 2, 2010 at 2:03 PM, Simon Riggs wrote:
> On Wed, 2010-06-02 at 13:45 -0400, Tom Lane wrote:
>> Stephen Frost writes:
>> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> >> Comments?
>>
>> > I'm not really a huge fan of adding another GUC, to be honest. I'm more
>> > inclined to say we treat 'max_archive_delay' as '0', and turn
>> > max_streaming_delay into what you've described.
On Wed, 2010-06-02 at 13:45 -0400, Tom Lane wrote:
> Stephen Frost writes:
> > * Tom Lane (t...@sss.pgh.pa.us) wrote:
> >> Comments?
>
> > I'm not really a huge fan of adding another GUC, to be honest. I'm more
> > inclined to say we treat 'max_archive_delay' as '0', and turn
> > max_streaming_delay into what you've described.
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Comments?
> I'm not really a huge fan of adding another GUC, to be honest. I'm more
> inclined to say we treat 'max_archive_delay' as '0', and turn
> max_streaming_delay into what you've described. If we fall back so far
> that w
Tom Lane wrote:
I'm still inclined to apply the part of Simon's patch that adds a
transmit timestamp to each SR send chunk. That would actually be
completely unused by the slave given my proposal above, but I think that
it is an important step to take to future-proof the SR protocol against
po
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> An important property of this design is that all relevant timestamps
> are taken on the slave, so clock skew isn't an issue.
I agree that this is important, and I do run NTP on all my servers and
even monitor it using Nagios.
It's still not a cure-all for
Simon Riggs writes:
> OK, here's v4.
I've been trying to stay out of this thread, but with beta2 approaching
and no resolution in sight, I'm afraid I have to get involved.
This patch seems to me to be going in fundamentally the wrong direction.
It's adding complexity and overhead (far more than is needed), and it's
failing utterly to resolve the objections that I raised to start with.
Simon Riggs wrote:
> On Mon, 2010-05-31 at 14:40 -0400, Bruce Momjian wrote:
>
>> Uh, we have three days before we package 9.0beta2. It would be
>> good if we could decide on the max_standby_delay issue soon.
>
> I've heard something from Heikki, not from anyone else. Those
> comments amount to
On Mon, 2010-05-31 at 14:40 -0400, Bruce Momjian wrote:
> Uh, we have three days before we package 9.0beta2. It would be good if
> we could decide on the max_standby_delay issue soon.
I've heard something from Heikki, not from anyone else. Those comments
amount to "lets replace max_standby_delay
Thanks for the review.
On Tue, 2010-06-01 at 13:36 +0300, Heikki Linnakangas wrote:
> If we really want to try to salvage max_standby_delay with a meaning
> similar to what it has now, I think we should go with the idea some
> people bashed around earlier and define the grace period as the
> difference between a piece of WAL becoming available in the standby,
> and applying it.
On 27/05/10 20:26, Simon Riggs wrote:
On Wed, 2010-05-26 at 16:22 -0700, Josh Berkus wrote:
Just this second posted about that, as it turns out.
I have a v3 *almost* ready of the keepalive patch. It still makes sense
to me after a few days reflection, so is worth discussion and review. In
or out, I want this settled within a week. Definitely need some R&R here.
Uh, we have three days before we package 9.0beta2. It would be good if
we could decide on the max_standby_delay issue soon.
---
Simon Riggs wrote:
> On Wed, 2010-05-26 at 16:22 -0700, Josh Berkus wrote:
> > > Just this second posted about that, as it turns out.
On Wed, 2010-05-26 at 16:22 -0700, Josh Berkus wrote:
> > Just this second posted about that, as it turns out.
> >
> > I have a v3 *almost* ready of the keepalive patch. It still makes sense
> > to me after a few days reflection, so is worth discussion and review. In
> > or out, I want this settled within a week. Definitely need some R&R
> > here.
> Just this second posted about that, as it turns out.
>
> I have a v3 *almost* ready of the keepalive patch. It still makes sense
> to me after a few days reflection, so is worth discussion and review. In
> or out, I want this settled within a week. Definitely need some R&R
> here.
Does the kee
On Wed, 2010-05-26 at 15:45 -0700, Josh Berkus wrote:
> > Committed with chunk size of 128 kB. I hope that's a reasonable
> > compromise, in the absence of any performance test data either way.
>
> So where are we with max_standby_delay? Status check?
Just this second posted about that, as it turns out.
On Sun, 2010-05-16 at 17:11 +0100, Simon Riggs wrote:
> New version, with some other cleanup of wait processing.
>
> New logic is that when Startup asks for next applychunk of WAL it saves
> the lastChunkTimestamp. That is then the base time used by
> WaitExceedsMaxStandbyDelay(), except when lat
On Thu, 2010-05-27 at 01:34 +0300, Heikki Linnakangas wrote:
> Committed with chunk size of 128 kB.
Thanks. I'm sure that's fine.
--
Simon Riggs www.2ndQuadrant.com
> Committed with chunk size of 128 kB. I hope that's a reasonable
> compromise, in the absence of any performance test data either way.
So where are we with max_standby_delay? Status check?
--
-- Josh Berkus
PostgreSQL Expe
On 19/05/10 00:37, Simon Riggs wrote:
On Tue, 2010-05-18 at 17:25 -0400, Heikki Linnakangas wrote:
On 18/05/10 17:17, Simon Riggs wrote:
There's no reason that the buffer size we use for XLogRead() should be
the same as the send buffer, if you're worried about that. My point is
that pq_putmessage contains internal flushes so at the libpq level you
gain nothing by big batches.
On Tue, 2010-05-18 at 17:25 -0400, Heikki Linnakangas wrote:
> On 18/05/10 17:17, Simon Riggs wrote:
> > There's no reason that the buffer size we use for XLogRead() should be
> > the same as the send buffer, if you're worried about that. My point is
> > that pq_putmessage contains internal flushes so at the libpq level you
> > gain nothing by big batches.
On 18/05/10 17:17, Simon Riggs wrote:
There's no reason that the buffer size we use for XLogRead() should be
the same as the send buffer, if you're worried about that. My point is
that pq_putmessage contains internal flushes so at the libpq level you
gain nothing by big batches. The read() will b
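The chunking behaviour that was eventually committed can be sketched as a loop: instead of pushing an entire WAL backlog through the socket in one call, the sender transmits at most one chunk per round and returns control to the main loop. The 128 kB figure is the chunk size Heikki committed; the function itself is illustrative Python, not the C walsender code, and the actual send is replaced by bookkeeping.

```python
def xlog_send_chunks(total_bytes, max_send_size=128 * 1024):
    """Count how many main-loop rounds it takes to drain a WAL backlog
    when each round sends at most max_send_size bytes."""
    sent = 0
    rounds = 0
    while sent < total_bytes:
        chunk = min(max_send_size, total_bytes - sent)
        sent += chunk   # stand-in for the actual socket send
        rounds += 1     # control returns to the main loop here, so the
                        # sender stays responsive between chunks
    return rounds
```

Draining a full 16 MB WAL segment at 128 kB per round takes 128 passes through the main loop, each an opportunity to notice shutdown requests or new state.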
On Tue, 2010-05-18 at 17:08 -0400, Heikki Linnakangas wrote:
> On 17/05/10 12:36, Jim Nasby wrote:
> > On May 15, 2010, at 12:05 PM, Heikki Linnakangas wrote:
> >> What exactly is the user trying to monitor? If it's "how far behind is
> >> the standby", the difference between pg_current_xlog_insert_location()
> >> in the master and pg_last_xlog_replay_location() in the standby seems
> >> more robust and well-defined.
On Tue, 2010-05-18 at 17:06 -0400, Heikki Linnakangas wrote:
> On 17/05/10 04:40, Simon Riggs wrote:
> > On Sun, 2010-05-16 at 16:53 +0100, Simon Riggs wrote:
> >>>
> >>> Attached patch rearranges the walsender loops slightly to fix the above.
> >>> XLogSend() now only sends up to MAX_SEND_SIZE byt
On 17/05/10 12:36, Jim Nasby wrote:
On May 15, 2010, at 12:05 PM, Heikki Linnakangas wrote:
What exactly is the user trying to monitor? If it's "how far behind is
the standby", the difference between pg_current_xlog_insert_location()
in the master and pg_last_xlog_replay_location() in the standby seems
more robust and well-defined.
On 17/05/10 04:40, Simon Riggs wrote:
On Sun, 2010-05-16 at 16:53 +0100, Simon Riggs wrote:
Attached patch rearranges the walsender loops slightly to fix the above.
XLogSend() now only sends up to MAX_SEND_SIZE bytes (== XLOG_SEG_SIZE /
2) in one round and returns to the main loop after that ev
On May 15, 2010, at 12:05 PM, Heikki Linnakangas wrote:
> What exactly is the user trying to monitor? If it's "how far behind is
> the standby", the difference between pg_current_xlog_insert_location()
> in the master and pg_last_xlog_replay_location() in the standby seems
> more robust and well-defined.
On Sun, 2010-05-16 at 16:53 +0100, Simon Riggs wrote:
> >
> > Attached patch rearranges the walsender loops slightly to fix the above.
> > XLogSend() now only sends up to MAX_SEND_SIZE bytes (== XLOG_SEG_SIZE /
> > 2) in one round and returns to the main loop after that even if there's
> > unsent
On Mon, 2010-05-17 at 11:51 +0900, Fujii Masao wrote:
> Is it OK that this keepalive message cannot be used by HS in file-based
> log-shipping? Even in SR, the startup process cannot use the keepalive
> until walreceiver has been started up.
Yes, I see those things.
We already have archive_time
On Mon, May 17, 2010 at 1:11 AM, Simon Riggs wrote:
> On Sat, 2010-05-15 at 19:50 +0100, Simon Riggs wrote:
>> On Sat, 2010-05-15 at 18:24 +0100, Simon Riggs wrote:
>>
>> > I will recode using that concept.
>
>> Startup gets new pointer when it runs out of data to replay. That might
>> or might no
On Sun, May 16, 2010 at 6:05 AM, Heikki Linnakangas wrote:
> Heikki Linnakangas wrote:
>> Simon Riggs wrote:
>>> WALSender sleeps even when it might have more WAL to send, it doesn't
>>> check it just unconditionally sleeps. At least WALReceiver loops until
>>> it has no more to receive. I just can't imagine why that's useful
>>> behaviour.
On Sat, 2010-05-15 at 19:50 +0100, Simon Riggs wrote:
> On Sat, 2010-05-15 at 18:24 +0100, Simon Riggs wrote:
>
> > I will recode using that concept.
> Startup gets new pointer when it runs out of data to replay. That might
> or might not include an updated keepalive timestamp, since there's no
>
On Sun, 2010-05-16 at 00:05 +0300, Heikki Linnakangas wrote:
> Heikki Linnakangas wrote:
> > Simon Riggs wrote:
> >> WALSender sleeps even when it might have more WAL to send, it doesn't
> >> check it just unconditionally sleeps. At least WALReceiver loops until
> >> it has no more to receive. I just can't imagine why that's useful
> >> behaviour.
Heikki Linnakangas wrote:
> Simon Riggs wrote:
>> WALSender sleeps even when it might have more WAL to send, it doesn't
>> check it just unconditionally sleeps. At least WALReceiver loops until
>> it has no more to receive. I just can't imagine why that's useful
>> behaviour.
>
> Good catch. That should be fixed.
Simon Riggs wrote:
> WALSender sleeps even when it might have more WAL to send, it doesn't
> check it just unconditionally sleeps. At least WALReceiver loops until
> it has no more to receive. I just can't imagine why that's useful
> behaviour.
Good catch. That should be fixed.
I also note that w
On Sat, 2010-05-15 at 18:24 +0100, Simon Riggs wrote:
> I will recode using that concept.
There's some behaviours that aren't helpful here:
Startup gets new pointer when it runs out of data to replay. That might
or might not include an updated keepalive timestamp, since there's no
exact relation
On Sat, 2010-05-15 at 20:05 +0300, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > On Sat, 2010-05-15 at 19:30 +0300, Heikki Linnakangas wrote:
> >> Doesn't feel right to me either. If you want to expose the
> >> keepalive-time to queries, it should be a separate field, something like
> >> lastMasterKeepaliveTime and a pg_last_master_keepalive() function to
> >> read it.
Simon Riggs wrote:
> On Sat, 2010-05-15 at 19:30 +0300, Heikki Linnakangas wrote:
>> Doesn't feel right to me either. If you want to expose the
>> keepalive-time to queries, it should be a separate field, something like
>> lastMasterKeepaliveTime and a pg_last_master_keepalive() function to
>> read
On Sat, 2010-05-15 at 19:30 +0300, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > On Sat, 2010-05-15 at 11:45 -0400, Tom Lane wrote:
> >> I'm also extremely dubious that it's a good idea to set
> >> recoveryLastXTime from this. Using both that and the timestamps from
> >> the wal log flies in the face of everything I remember about control
> >> theory.
Simon Riggs wrote:
> On Sat, 2010-05-15 at 11:45 -0400, Tom Lane wrote:
>> I'm also extremely dubious that it's a good idea to set
>> recoveryLastXTime from this. Using both that and the timestamps from
>> the wal log flies in the face of everything I remember about control
>> theory. We should b
On Sat, 2010-05-15 at 11:45 -0400, Tom Lane wrote:
> Simon Riggs writes:
> > Patch adds a keepalive message to ensure max_standby_delay is useful.
>
> The proposed placement of the keepalive-send is about the worst it could
> possibly be. It needs to be done right before pq_flush to ensure
> minimum transfer delay.
Simon Riggs writes:
> Patch adds a keepalive message to ensure max_standby_delay is useful.
The proposed placement of the keepalive-send is about the worst it could
possibly be. It needs to be done right before pq_flush to ensure
minimum transfer delay. Otherwise any attempt to measure clock skew
Patch adds a keepalive message to ensure max_standby_delay is useful.
No WAL format changes, no libpq changes. Just an additional message type
for the streaming replication protocol, sent once per main loop in
WALsender. Plus docs.
Comments?
--
Simon Riggs www.2ndQuadrant.com
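The patch as described adds one message type to the streaming-replication protocol, sent once per WALSender main-loop iteration, carrying a send timestamp the standby can use to keep max_standby_delay meaningful. A minimal sketch of such a message follows; the wire layout (message letter 'k', end-of-WAL position, float timestamp) is purely illustrative and not the actual protocol format.

```python
import struct
import time

# Hypothetical keepalive layout: 1-byte message type, 8-byte WAL end
# position, 8-byte float send timestamp, network byte order.
KEEPALIVE_FMT = "!cqd"

def build_keepalive(wal_end_pos):
    """Pack a keepalive carrying the sender's current clock reading."""
    return struct.pack(KEEPALIVE_FMT, b"k", wal_end_pos, time.time())

def parse_keepalive(msg):
    """Unpack a keepalive on the receiving side."""
    msgtype, wal_end_pos, sent_at = struct.unpack(KEEPALIVE_FMT, msg)
    assert msgtype == b"k"
    return wal_end_pos, sent_at
```

Because the message carries the master's send time, the standby can compare it against its own receipt time, which is what makes the clock-skew discussion earlier in the thread relevant.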