Josh Berkus <[EMAIL PROTECTED]> writes:
> Tom,
>> [ shrug... ]  This is not consistent with my experience.  I can't help
>> suspecting misconfiguration; perhaps shared_buffers much smaller on the
>> backup, for example.

> You're only going to see it on SMP systems which have a high degree of CPU 
> utilization.  That is, when you have 16 cores processing flat-out, then 
> the *single* core which will replay that log could certainly have trouble 
> keeping up.

You are supposing that replay takes as much CPU as live query
processing, which is nonsense (at least as long as we don't load it
down with a bunch of added complexity ;-)).
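
As for the misconfiguration angle: that's trivial to rule in or out.  A
quick check on both machines would do it -- just a sketch, nothing magic
about these particular settings beyond shared_buffers:

    SHOW shared_buffers;

    -- or compare a few memory-related settings in one go
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'wal_buffers', 'work_mem');

If those come back identical on master and standby, at least we can stop
guessing about that.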

The argument that Heikki actually made was that multiple parallel
queries could use more of the I/O bandwidth of a multi-disk array
than recovery could.  Which I believe, but I question how much of a
real-world problem it is.  For it to be an issue, you'd need a workload
that is almost all updates (else recovery wins by not having to
replicate reads of pages that don't get modified) and the updates have
to range over a working set significantly larger than physical RAM
(else I/O bandwidth won't be the bottleneck anyway).  I think we're
talking about an extremely small population of real users.
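
(Anyone who wants to check whether their workload is actually in that
regime can get a rough idea from the stats collector -- this assumes a
server new enough to have the per-database tuple counters in
pg_stat_database:

    SELECT datname,
           tup_inserted + tup_updated + tup_deleted AS tuples_written,
           tup_returned + tup_fetched               AS tuples_read,
           blks_read, blks_hit
      FROM pg_stat_database
     WHERE datname = current_database();

A write count that dwarfs the read count, plus blks_read staying large
relative to blks_hit, is at least suggestive of the almost-all-updates,
bigger-than-RAM case I'm describing; anything else and recovery shouldn't
be I/O-starved.)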

                        regards, tom lane
