On Fri, 14 Dec 2007, Zeugswetter Andreas ADI SD wrote:
I don't follow. The problem is not writes but reads. And if the reads
are random enough no cache controller will help.
The specific example Tom was running was, in his words, "100% disk write
bound". I was commenting on why I thought tha
Hannu Krosing wrote:
until N fubbers used
..whatever a fubber is :-)
Nice typo!
Markus
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Hello Hannu,
Hannu Krosing wrote:
(For parallelized queries, superuser privileges might appear wrong, but
I'm arguing that parallelizing the rights checking isn't worth the
trouble, so the initiating worker backend should do that and only
delegate safe jobs to helper backends. Or is that a ser
On Thu, 2007-12-13 at 20:25, Heikki Linnakangas wrote:
...
> Hmm. That assumes that nothing else than the WAL replay will read
> pages into shared buffers. I guess that's true at the moment, but it
> doesn't seem impossible that something like Florian's read-only queries
> on a stand by server would change t
On Fri, 2007-12-14 at 10:39, Markus Schiltknecht wrote:
> Hi,
>
> (For parallelized queries, superuser privileges might appear wrong, but
> I'm arguing that parallelizing the rights checking isn't worth the
> trouble, so the initiating worker backend should do that and only
On Fri, 2007-12-14 at 10:51 +0100, Zeugswetter Andreas ADI SD wrote:
> The problem is not writes but reads.
That's what I see.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
> > Note that even though the processor is 99% in wait state the drive is
> > only handling about 3 MB/s. That translates into a seek time of 2.2ms
> > which is actually pretty fast... But note that if this were a RAID array
> > Postgres wouldn't be getting any better results. A RAID array wou
Hi,
Alvaro Herrera wrote:
Simon Riggs wrote:
ISTM it's just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
I also see similarity to w
"Tom Lane" <[EMAIL PROTECTED]> writes:
> The argument that Heikki actually made was that multiple parallel
> queries could use more of the I/O bandwidth of a multi-disk array
> than recovery could. Which I believe, but I question how much of a
> real-world problem it is. For it to be an issue,
Josh Berkus <[EMAIL PROTECTED]> writes:
> Tom,
>> [ shrug... ] This is not consistent with my experience. I can't help
>> suspecting misconfiguration; perhaps shared_buffers much smaller on the
>> backup, for example.
> You're only going to see it on SMP systems which have a high degree of CPU
Tom,
> [ shrug... ] This is not consistent with my experience. I can't help
> suspecting misconfiguration; perhaps shared_buffers much smaller on the
> backup, for example.
You're only going to see it on SMP systems which have a high degree of CPU
utilization. That is, when you have 16 cores
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> It would be interesting to do something like that to speed up replay of long
> PITR archives, though. You could scan all (or at least far ahead) the WAL
> records, and make note of where there is full page writes for each page.
> Whenever there's
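The scan-ahead idea above can be sketched in a few lines (a toy model, not PostgreSQL code; the two-pass structure and the skip rule are inferred from the truncated quote):

```python
# Sketch of the scan-ahead idea: since a full-page write (FPW) replaces a
# page wholesale, a replay record that precedes a later FPW for the same
# page never needs to read the old page contents from disk.
def last_fpw_positions(wal):
    """First pass: remember, per page, the position of its last FPW."""
    last = {}
    for pos, (page, is_fpw) in enumerate(wal):
        if is_fpw:
            last[page] = pos
    return last

def needs_read(pos, page, last_fpw):
    """A record must read the page from disk only if no FPW follows it."""
    return last_fpw.get(page, -1) < pos

# Toy WAL: two records for page A (the second a full-page write), one for B.
wal = [("A", False), ("A", True), ("B", False)]
fpw = last_fpw_positions(wal)
```

The record at position 0 can skip its read (page A will be overwritten by the FPW at position 1), while page B, with no later FPW, must still be fetched.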
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Koichi showed me & Simon graphs of DBT-2 runs in their test lab back in
> May. They had setup two identical systems, one running the benchmark,
> and another one as a warm stand-by. The stand-by couldn't keep up; it
> couldn't replay the WAL as qu
Tom Lane wrote:
Also, I have not seen anyone provide a very credible argument why
we should spend a lot of effort on optimizing a part of the system
that is so little-exercised. Don't tell me about warm standby
systems --- they are fine as long as recovery is at least as fast
as the original tra
On Thu, 2007-12-13 at 16:41 -0500, Tom Lane wrote:
> Recovery is inherently one of the least-exercised parts of the system,
> and it gets more so with each robustness improvement we make elsewhere.
> Moreover, because it is fairly dumb, anything that does go wrong will
> likely result in silent da
On Thu, 13 Dec 2007 11:12:26 -0800
"Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> > > Hmm --- I was testing a straight crash-recovery scenario, not
> > > restoring from archive. Are you sure your restore_command script
> > > isn't responsible for a
On Thu, 13 Dec 2007, Gregory Stark wrote:
Note that even though the processor is 99% in wait state the drive is
only handling about 3 MB/s. That translates into a seek time of 2.2ms
which is actually pretty fast... But note that if this were a RAID array
Postgres wouldn't be getting any better results. A RAID array wou
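As a rough sanity check on those figures (a sketch; the 8 kB page size and the MiB reading of "MB" are assumptions, which may be why it lands near rather than exactly on the quoted 2.2 ms):

```python
# Back-of-the-envelope check: if every read is one random 8 kB page
# (PostgreSQL's default block size), a sustained 3 MiB/s works out to
# one read roughly every 2.6 ms -- i.e. seek-bound, not bandwidth-bound.
PAGE_SIZE = 8 * 1024           # bytes per random read
THROUGHPUT = 3 * 1024 * 1024   # "about 3 MB/s", read here as MiB/s

reads_per_second = THROUGHPUT / PAGE_SIZE
ms_per_read = 1000.0 / reads_per_second
print(f"{reads_per_second:.0f} reads/s, ~{ms_per_read:.1f} ms per read")
```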
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Hmm. That assumes that nothing else than the WAL replay will read
> pages into shared buffers. I guess that's true at the moment, but it
> doesn't seem impossible that something like Florian's read-only queries
> on a stand by server would change t
On Thu, 2007-12-13 at 21:13 +, Simon Riggs wrote:
> Of course if we scan that far ahead we can start removing aborted
> transactions also, which is the more standard optimization of
> recovery.
Recall that thought!
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
On Thu, 2007-12-13 at 20:25 +, Heikki Linnakangas wrote:
> Simon Riggs wrote:
> > Allocate a recovery cache of size maintenance_work_mem that goes away
> > when recovery ends.
> >
> > For every block mentioned in WAL record that isn't an overwrite, first
> > check shared_buffers. If it's in sha
Simon Riggs wrote:
Allocate a recovery cache of size maintenance_work_mem that goes away
when recovery ends.
For every block mentioned in WAL record that isn't an overwrite, first
check shared_buffers. If it's in shared_buffers apply immediately and
move on. If not in shared_buffers then put in r
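As far as the truncated message allows, the proposed cache can be modeled like this (a hypothetical simulation; `shared_buffers` and `recovery_cache` here are plain Python stand-ins for the real structures):

```python
# Toy model of the proposed recovery cache: apply a record at once when its
# block is already in shared_buffers, otherwise defer it into a side cache
# (sized like maintenance_work_mem) to be applied in a later batched pass.
shared_buffers = {101, 205}    # block numbers currently cached (made up)
recovery_cache = {}            # block number -> deferred WAL payloads

def replay(block, payload):
    if block in shared_buffers:
        return "applied"                        # hot block: apply now
    recovery_cache.setdefault(block, []).append(payload)
    return "deferred"                           # cold block: defer the I/O

outcomes = [replay(101, "r1"), replay(999, "r2"), replay(999, "r3")]
```

The point of deferring is that all records queued for one cold block can later be applied with a single read, instead of one random read per record.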
On Thu, 2007-12-13 at 10:18 -0300, Alvaro Herrera wrote:
> Gregory Stark wrote:
> > "Simon Riggs" <[EMAIL PROTECTED]> writes:
> >
> > > It's a good idea, but it will require more complex code. I prefer the
> > > simpler solution of using more processes to solve the I/O problem.
> >
> > Huh, I for
On Thu, 2007-12-13 at 12:28 +, Heikki Linnakangas wrote:
> Gregory Stark wrote:
> > "Simon Riggs" <[EMAIL PROTECTED]> writes:
> >
> >> We would have readbuffers in shared memory, like wal_buffers in reverse.
> >> Each worker would read the next WAL record and check there is no
> >> conflict wi
Simon Riggs wrote:
> ISTM it's just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
--
Alvaro Herrera http://www.amazon.com
Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > It's a good idea, but it will require more complex code. I prefer the
> > simpler solution of using more processes to solve the I/O problem.
>
> Huh, I forgot about that idea. Ironically that was what I suggested when
> Heikki
Simon,
On Dec 13, 2007 11:21 AM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> Anyway, I'll leave this now, since I think we need to do Florian's work
> first either way and that is much more eagerly awaited I think.
Speaking of that, is there any news about it and about Florian? It was
a really promi
Gregory Stark wrote:
"Simon Riggs" <[EMAIL PROTECTED]> writes:
We would have readbuffers in shared memory, like wal_buffers in reverse.
Each worker would read the next WAL record and check there is no
conflict with other concurrent WAL records. If not, it will apply the
record immediately, othe
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> We would have readbuffers in shared memory, like wal_buffers in reverse.
> Each worker would read the next WAL record and check there is no
> conflict with other concurrent WAL records. If not, it will apply the
> record immediately, otherwise wait for
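The conflict rule quoted above can be modeled in a few lines (a toy sketch using plain block-number sets; real records would be identified by relfilenode and block, which is an assumption here):

```python
# Toy version of the conflict check: a worker may apply a WAL record right
# away only if the blocks it touches are disjoint from every block set that
# other workers are currently applying; otherwise it must wait its turn.
in_progress = [{1, 2}, {7}]    # block sets of records being applied now

def can_apply_now(record_blocks):
    return all(record_blocks.isdisjoint(busy) for busy in in_progress)

free = can_apply_now({3, 4})      # touches nothing in flight -> apply
blocked = can_apply_now({2, 9})   # block 2 is busy -> must wait
```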
On Thu, 2007-12-13 at 09:45 +, Gregory Stark wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
>
> > On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
> >> Heikki proposed a while back to use posix_fadvise() when processing logs to
> >> read-ahead blocks which the recover will need befo
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
>> Heikki proposed a while back to use posix_fadvise() when processing logs to
>> read-ahead blocks which the recover will need before actually attempting to
>> recover them. On a raid array that wo
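The posix_fadvise() readahead Heikki proposed maps onto a single POSIX call; a minimal sketch (Python's os.posix_fadvise wraps the same call; the file and block numbers are illustrative, not the actual patch):

```python
import os
import tempfile

# Hint the kernel that a block will be wanted soon, so its read can be
# scheduled (and overlapped across RAID spindles) before the replay loop
# actually fetches it.
def prefetch_block(fd, block_no, block_size=8192):
    if hasattr(os, "posix_fadvise"):   # POSIX-only; absent on e.g. Windows
        os.posix_fadvise(fd, block_no * block_size, block_size,
                         os.POSIX_FADV_WILLNEED)
        return True
    return False

# Demonstration against a throwaway file standing in for a relation segment.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"\x00" * (8192 * 4))
    f.flush()
    hinted = prefetch_block(f.fileno(), 2)
```

The hint is advisory: it returns immediately, and the payoff comes only when several outstanding hints let the array service reads in parallel.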
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>
> > "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> >
> >> Exactly. Which is the point I am making. Five minutes of transactions
> >> is nothing (speaking generally).. In short, if we are in recovery