Re: [HACKERS] bgwriter changes

2004-12-20 Thread Simon Riggs
On Thu, 2004-12-16 at 11:07, Neil Conway wrote:
> Zeugswetter Andreas DAZ SD wrote:
> > This has the disadvantage of converging against 0 dirty pages.
> > A system that has less than maxpages dirty will write every page with 
> > every bgwriter run.
> 
> Yeah, I'm concerned about the bgwriter being overly aggressive if we 
> disable bgwriter_percent. If we leave the settings as they are (delay = 
> 200, maxpages = 100, shared_buffers = 1000 by default), we will be 
> writing all the dirty pages to disk every 2 seconds, which seems far too 
> much.
> 
> It might also be good to reduce the delay, in order to more proactively 
> keep the LRUs clean (e.g. scanning to find N dirty pages once per second 
> is likely to reach farther away from the LRU than scanning for N/M pages 
> once per 1/M seconds). On the other hand the more often the bgwriter 
> scans the buffer pool, the more times the BufMgrLock needs to be 
> acquired -- and in a system in which pages aren't being dirtied very 
> rapidly (or the dirtied pages tend to be very hot), each of those scans 
> is going to take a while to find enough dirty pages using #2. So perhaps 
> it is best to leave the delay as is for 8.0.

I think this is probably the right thing to do, since the majority of
users will have low/medium workloads, not the extremes of performance
that we have mainly been discussing.

> > This might have the disadvantage of either leaving too much for the 
> > checkpoint or writing too many dirty pages in one run. Is writing a lot 
> > in one run actually a problem though ? Or does the bgwriter pause
> > periodically while writing the pages of one run ?
> 
> The bgwriter does not pause between writing pages. What would be the 
> point of doing that? The kernel is going to be caching the write() anyway.
> 
> > If this is expressed in pages it would naturally need to be more than the 
> current maxpages (to accommodate for clean pages). The suggested 2% sounded 
> > way too low for me (that leaves 98% to the checkpoint).
> 
> I agree this might be a problem, but it doesn't necessarily leave 98% to 
> be written at checkpoint: if the buffers in the LRU change over time, 
> the set of pages searched by the bgwriter will also change. 

Agreed.

> I'm not sure 
> how quickly the pages near the LRU change in a "typical workload"; 
> moreover, I think this would vary between different workloads.

Yes, clearly we need to be able to change the parameters according to
the workload and, in the long term, have them vary as needs change.

My concern at the moment is that the bgwriter_delay looks to me like it
needs to be set lower for busier workloads, yet that is not possible
because of the contention for the BufMgrLock. Investigating optimal
parameter settings isn't possible while this contention exists.

Incidentally, setting debug_shared_buffers also causes some contention
which I'll look at reducing for 8.1, so it can be used more
frequently as a log_ setting.

-- 
Best Regards, Simon Riggs




Re: [HACKERS] bgwriter changes

2004-12-20 Thread Simon Riggs
On Mon, 2004-12-20 at 01:17, Mark Kirkwood wrote:
> Mark Kirkwood wrote:
> 
> > It occurs to me that cranking up the number of transactions (say 
> > 1000->10) and seeing if said regression persists would be 
> > interesting.  This would give the smoothing effect of the bgwriter 
> > (plus the ARC) a better chance to shine. 
> 
> I ran a few of these over the weekend - since it rained here :-) , and 
> the results are quite interesting:
> 
> [2xPIII, 2G, 2xATA RAID 0, FreeBSD 5.3 with the same non default Pg 
> parameters as before]
> 
> clients = 4 transactions = 10 (/client), each test run twice
> 
> Version             tps
> 7.4.6               49
> 8.0.0.0RC1          50
> 8.0.0.0RC1 + rem    49
> 8.0.0.0RC1 + bg2    50
> 
> Needless to say, all well within measurement error of each other (the 
> variability was about 1).
> 
> I suspect that my previous tests had too few transactions to trigger 
> many (or any) checkpoints. With them now occurring in the test, they 
> look to be the most significant factor (contrast with 70-80 tps for 4 
> clients with 1000 transactions).
> 
> Also with a small number of transactions, the fsync'ed blocks may have 
> all fitted in the ATA disk caches (2x2M). In hindsight I should have 
> disabled this! (might run the smaller no. of transactions again with 
> hw.ata.wc=0 and see if this is enlightening)

These test results do seem to have greatly reduced variability: thanks.

From what you say, this means the parameter settings were (?):
shared_buffers = 1
bgwriter_delay = 200
bgwriter_maxpages = 100

My interpretation of this is that the bgwriter is not effective with
these (the default) parameter settings. 

I think optimum performance comes from reducing both bgwriter_delay and
bgwriter_maxpages, though reducing the delay isn't sensibly possible
in 8.0RCn when shared_buffers is large.
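
For what it's worth, a back-of-the-envelope sketch of how the two settings
bound the write rate (illustrative numbers only, not Mark's measurements;
it just uses the defaults discussed above and the default 8 kB block size):

/* Back-of-the-envelope sketch only: illustrative numbers, not measurements. */
#include <stdio.h>

static void show(int maxpages, int delay_ms)
{
    double rounds_per_sec = 1000.0 / delay_ms;
    double pages_per_sec  = maxpages * rounds_per_sec;     /* upper bound */
    double mb_per_sec     = pages_per_sec * 8.0 / 1024.0;  /* 8 kB blocks */

    printf("maxpages=%3d delay=%3dms -> at most %.0f pages/s (%.1f MB/s)\n",
           maxpages, delay_ms, pages_per_sec, mb_per_sec);
}

int main(void)
{
    show(100, 200);   /* the defaults discussed above */
    show(50, 100);    /* same ceiling, spread over twice as many rounds */
    return 0;
}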

-- 
Best Regards, Simon Riggs




Re: [HACKERS] bgwriter changes

2004-12-19 Thread Mark Kirkwood
Mark Kirkwood wrote:
> It occurs to me that cranking up the number of transactions (say 
> 1000->10) and seeing if said regression persists would be 
> interesting.  This would give the smoothing effect of the bgwriter 
> (plus the ARC) a better chance to shine. 
I ran a few of these over the weekend - since it rained here :-) , and 
the results are quite interesting:

[2xPIII, 2G, 2xATA RAID 0, FreeBSD 5.3 with the same non default Pg 
parameters as before]

clients = 4 transactions = 10 (/client), each test run twice
Version             tps
7.4.6               49
8.0.0.0RC1          50
8.0.0.0RC1 + rem    49
8.0.0.0RC1 + bg2    50
Needless to say, all well within measurement error of each other (the 
variability was about 1).

I suspect that my previous tests had too few transactions to trigger 
many (or any) checkpoints. With them now occurring in the test, they 
look to be the most significant factor (contrast with 70-80 tps for 4 
clients with 1000 transactions).

Also with a small number of transactions, the fsync'ed blocks may have 
all fitted in the ATA disk caches (2x2M). In hindsight I should have 
disabled this! (might run the smaller no. of transactions again with 
hw.ata.wc=0 and see if this is enlightening)

regards
Mark


Re: [HACKERS] bgwriter changes

2004-12-16 Thread Neil Conway
Zeugswetter Andreas DAZ SD wrote:
> This has the disadvantage of converging against 0 dirty pages.
> A system that has less than maxpages dirty will write every page with 
> every bgwriter run.
Yeah, I'm concerned about the bgwriter being overly aggressive if we 
disable bgwriter_percent. If we leave the settings as they are (delay = 
200, maxpages = 100, shared_buffers = 1000 by default), we will be 
writing all the dirty pages to disk every 2 seconds, which seems far too 
much.

It might also be good to reduce the delay, in order to more proactively 
keep the LRUs clean (e.g. scanning to find N dirty pages once per second 
is likely to reach farther away from the LRU than scanning for N/M pages 
once per 1/M seconds). On the other hand the more often the bgwriter 
scans the buffer pool, the more times the BufMgrLock needs to be 
acquired -- and in a system in which pages aren't being dirtied very 
rapidly (or the dirtied pages tend to be very hot), each of those scans 
is going to take a while to find enough dirty pages using #2. So perhaps 
it is best to leave the delay as is for 8.0.

> This might have the disadvantage of either leaving too much for the 
> checkpoint or writing too many dirty pages in one run. Is writing a lot 
> in one run actually a problem though ? Or does the bgwriter pause
> periodically while writing the pages of one run ?
The bgwriter does not pause between writing pages. What would be the 
point of doing that? The kernel is going to be caching the write() anyway.

> If this is expressed in pages it would naturally need to be more than the 
> current maxpages (to accommodate for clean pages). The suggested 2% sounded 
> way too low for me (that leaves 98% to the checkpoint).
I agree this might be a problem, but it doesn't necessarily leave 98% to 
be written at checkpoint: if the buffers in the LRU change over time, 
the set of pages searched by the bgwriter will also change. I'm not sure 
how quickly the pages near the LRU change in a "typical workload"; 
moreover, I think this would vary between different workloads.

-Neil


Re: [HACKERS] bgwriter changes

2004-12-16 Thread Zeugswetter Andreas DAZ SD

> > Only if you redefine the meaning of bgwriter_percent.  At present it's
> > defined by reference to the total number of dirty pages, and that can't
> > be known without collecting them all.
> > 
> > If it were, say, a percentage of the total length of the T1/T2 lists,
> > then we'd have some chance of stopping the scan early.

> The other way around would make sense. In order to avoid writing the 
> busiest buffers at all (except for checkpoinging), the parameter should 
> mean "don't scan the last x% of the queue at all".

Your meaning is the complement (1 minus) of the meaning above (at least,
that is what Tom and I meant), but it is probably easier to understand
(== Informix LRU_MIN_DIRTY).

> Still, we need to avoid scanning over all the clean blocks of a large 
> buffer pool, so there is need for a separate dirty-LRU.

Maybe a "may be dirty" bitmap would be easier to do without beeing deadlock 
prone ?
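
To make the idea concrete, a minimal sketch of what such a bitmap might
look like (purely illustrative; the names are made up and it glosses over
the shared-memory and ordering details that would matter in the real
buffer manager):

/* Illustrative sketch of the idea only -- not a proposal for actual
 * bufmgr data structures. */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define NBUFFERS  10000                         /* assumed pool size */
#define WORDBITS  (sizeof(unsigned int) * CHAR_BIT)

static unsigned int maybe_dirty[(NBUFFERS + WORDBITS - 1) / WORDBITS];

/* Backends set the bit whenever they dirty a page; false positives are
 * harmless because the bgwriter rechecks the real dirty flag. */
static void mark_maybe_dirty(int buf)
{
    maybe_dirty[buf / WORDBITS] |= 1u << (buf % WORDBITS);
}

/* The bgwriter tests and clears the bit to decide which buffers are
 * worth looking at, skipping long runs of clean buffers cheaply. */
static bool test_and_clear_maybe_dirty(int buf)
{
    unsigned int mask = 1u << (buf % WORDBITS);
    bool was_set = (maybe_dirty[buf / WORDBITS] & mask) != 0;
    maybe_dirty[buf / WORDBITS] &= ~mask;
    return was_set;
}

int main(void)
{
    mark_maybe_dirty(42);
    printf("%d %d\n", test_and_clear_maybe_dirty(42),
                      test_and_clear_maybe_dirty(42));   /* prints "1 0" */
    return 0;
}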

Andreas



Re: [HACKERS] bgwriter changes

2004-12-15 Thread Mark Kirkwood

Simon Riggs wrote:
> 100pct.patch (SR)
> Test results to date:
> 1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
> pgbench 1xCPU 1xDisk shared_buffers=1
> showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
> performance significantly against 8.0RC1

It occurs to me that cranking up the number of transactions (say 
1000->10) and seeing if said regression persists would be 
interesting.  This would give the smoothing effect of the bgwriter (plus 
the ARC) a better chance to shine.

regards
Mark


Re: [HACKERS] bgwriter changes

2004-12-15 Thread Jan Wieck
On 12/15/2004 12:10 PM, Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
>> Still, we need to avoid scanning over all the clean blocks of a large 
>> buffer pool, so there is need for a separate dirty-LRU.
> That's not happening, unless you want to undo the cntxDirty stuff,
> with unknown implications for performance and deadlock safety.  It's
> definitely not happening in 8.0 ;-)
Sure not.
Jan


Re: [HACKERS] bgwriter changes

2004-12-15 Thread Tom Lane
Jan Wieck <[EMAIL PROTECTED]> writes:
> Still, we need to avoid scanning over all the clean blocks of a large 
> buffer pool, so there is need for a separate dirty-LRU.

That's not happening, unless you want to undo the cntxDirty stuff,
with unknown implications for performance and deadlock safety.  It's
definitely not happening in 8.0 ;-)

regards, tom lane



Re: [HACKERS] bgwriter changes

2004-12-15 Thread Jan Wieck
On 12/14/2004 2:40 PM, Tom Lane wrote:
"Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]> writes:
Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when eighter maxpages is reached or bgwriter_percent
pages are scanned ?
Only if you redefine the meaning of bgwriter_percent.  At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.
If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.
That definition is identical to a fixed maximum number of pages to write 
per call. And since that parameter exists too, it would be redundant.

The other way around would make sense. In order to avoid writing the 
busiest buffers at all (except for checkpointing), the parameter should 
mean "don't scan the last x% of the queue at all".

Still, we need to avoid scanning over all the clean blocks of a large 
buffer pool, so there is need for a separate dirty-LRU.

Jan


Re: RE: Re: [HACKERS] bgwriter changes

2004-12-15 Thread simon

Zeugswetter Andreas DAZ SD <[EMAIL PROTECTED]> wrote on
15.12.2004, 15:33:16:
> 
> > The two alternative algorithms are similar, but have these 
> > differences:
> > The former (option (2)) finds a constant number of dirty pages, though
> > has varying search time.
> 
> This has the disadvantage of converging against 0 dirty pages.
> A system that has less than maxpages dirty will write every page with 
> every bgwriter run.

Yes, that is my issue with that algorithm: it causes more contention
when there are fewer dirty pages.
 
> > The latter (option (3)) has constant search
> > time, yet finds a varying number of dirty pages.
> 
> This might have the disadvantage of either leaving too much for the 
> checkpoint or writing too many dirty pages in one run. Is writing a lot 
> in one run actually a problem though ? Or does the bgwriter pause
> periodically while writing the pages of one run ?
> If this is expressed in pages it would naturally need to be more than the 
> current maxpages (to accommodate for clean pages). The suggested 2% sounded 
> way too low for me (that leaves 98% to the checkpoint).

This remains to be seen. We have Mark Kirkwood's test results that show
that the algorithm may work better, but no large scale OSDL run as yet.

My view is that the 2% is misleading. The whole buffer list is like a
conveyor belt moving towards the LRU. It is my *conjecture* that
cleaning the LRU would be sufficient to clean the whole list
eventually. Blocks in the buffer list that always stay near the MRU
would be dirtied again quickly even if you did clean them, so if they
don't reach nearly to the LRU then there is less benefit in cleaning
them. (1%, 2% or 5% would need to be a tunable factor; 2% was the
suggested default)
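
As a purely illustrative back-of-the-envelope (the pool size below is
assumed, not taken from any of the test runs discussed here), a 2% window
scanned every bgwriter_delay still covers the LRU end of the list quite
quickly:

/* Back-of-the-envelope only; the pool size is assumed, not taken from
 * any of the test runs in this thread. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    int    shared_buffers   = 10000;   /* assumed pool size */
    double bgwriter_percent = 2.0;     /* the suggested default window */
    int    bgwriter_delay   = 200;     /* ms between rounds */

    /* buffers examined at the LRU end of the list per round */
    int window = (int) ceil(shared_buffers * bgwriter_percent / 100.0);

    /* coverage of the LRU end per second */
    double per_second = window * (1000.0 / bgwriter_delay);

    printf("window = %d buffers/round, %.0f buffers/s near the LRU\n",
           window, per_second);
    return 0;
}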

If the bgwriter writes too often it would get in the way of other work,
so there is clearly an optimum setting for any workload.

> Also I think we are doing too frequent checkpoints with bgwriter in 
> place. Every 15-30 minutes should be sufficient, even for benchmarks.
> We need a tuned bgwriter for this though.

Well, yes, you're right... but the bug limiting us to 255 files
restricts us there in higher-performance situations.

Best Regards, Simon Riggs



Re: [HACKERS] bgwriter changes

2004-12-15 Thread Zeugswetter Andreas DAZ SD

> The two alternative algorithms are similar, but have these 
> differences:
> The former (option (2)) finds a constant number of dirty pages, though
> has varying search time.

This has the disadvantage of converging against 0 dirty pages.
A system that has less than maxpages dirty will write every page with 
every bgwriter run.

> The latter (option (3)) has constant search
> time, yet finds a varying number of dirty pages.

This might have the disadvantage of either leaving too much for the 
checkpoint or writing too many dirty pages in one run. Is writing a lot 
in one run actually a problem though ? Or does the bgwriter pause
periodically while writing the pages of one run ?
If this is expressed in pages it would naturally need to be more than the 
current maxpages (to accommodate for clean pages). The suggested 2% sounded 
way too low for me (that leaves 98% to the checkpoint).

Also I think we are doing too frequent checkpoints with bgwriter in 
place. Every 15-30 minutes should be sufficient, even for benchmarks.
We need a tuned bgwriter for this though.

Andreas



Re: Re: [HACKERS] bgwriter changes

2004-12-15 Thread simon

Zeugswetter Andreas DAZ SD <[EMAIL PROTECTED]> wrote on
15.12.2004, 11:39:44:
> 
> > > > and stops early when either maxpages is reached or bgwriter_percent
> > > > pages are scanned ?
> > > 
> > > Only if you redefine the meaning of bgwriter_percent.  At present it's
> > > defined by reference to the total number of dirty pages, and that can't
> > > be known without collecting them all.
> > > 
> > > If it were, say, a percentage of the total length of the T1/T2 lists,
> > > then we'd have some chance of stopping the scan early.
> > 
> > ...which was exactly what was proposed for option (3).
> 
> But the benchmark run was with bgwriter_percent 100. 

Yes, it was for run 211, but the patch that was used effectively
disabled bgwriter_percent in favour of looking only at
bgwriter_maxpages.

The patch used was not exactly what was being discussed here. In that
patch, StrategyDirtyBufferList scans until it finds bgwriter_maxpages
dirty pages, then stops. That means a varying number of buffers on the
list are scanned, starting from the LRU.

What is being suggested here was implemented for bg2.patch. The
algorithm in there was for StrategyDirtyBufferList to scan until it had
looked at the dirty/clean status of bgwriter_maxpages buffers. That
means a constant number of buffers on the list are scanned, starting
from the LRU. 

The two alternative algorithms are similar, but have these differences:
The former (option (2)) finds a constant number of dirty pages, though
has varying search time. The latter (option (3)) has constant search
time, yet finds a varying number of dirty pages. Both alternatives
avoid scanning the whole of the buffer list, as is the case in 8.0RC1,
allowing the bgwriter to act more frequently at lower cost.
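
To make the difference concrete, here is a minimal sketch of the two stop
conditions (illustrative only: this is not the code from either patch, and
the names are invented):

#include <stdbool.h>
#include <stdio.h>

typedef struct { bool dirty; } BufSketch;

/* Option (2): stop once maxpages *dirty* buffers have been collected.
 * Constant write volume per round, but variable scan length. */
static int scan_until_dirty_quota(const BufSketch *lru, int nbuffers,
                                  int maxpages, int *out)
{
    int found = 0;
    for (int i = 0; i < nbuffers && found < maxpages; i++)
        if (lru[i].dirty)
            out[found++] = i;
    return found;
}

/* Option (3): stop once maxpages buffers have been *examined*.
 * Constant scan length per round, but variable write volume. */
static int scan_fixed_window(const BufSketch *lru, int nbuffers,
                             int maxpages, int *out)
{
    int found = 0;
    int limit = maxpages < nbuffers ? maxpages : nbuffers;
    for (int i = 0; i < limit; i++)
        if (lru[i].dirty)
            out[found++] = i;
    return found;
}

int main(void)
{
    BufSketch pool[8] = {{true},{false},{true},{true},
                         {false},{false},{true},{false}};
    int out[8];
    printf("option 2 found %d dirty\n",
           scan_until_dirty_quota(pool, 8, 2, out));   /* scans 3 buffers */
    printf("option 3 found %d dirty\n",
           scan_fixed_window(pool, 8, 2, out));        /* scans 2 buffers */
    return 0;
}

The tradeoff is as described above: option (2) bounds how much is written
per round, option (3) bounds how much is scanned.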

There's some evidence that the second algorithm may be better, but it may
have other characteristics or side-effects that we don't yet know. So at
this stage of the game, I'm happier not to progress option (3) any
further for 8.0, since option (2) is closest to the one that has been
through beta-testing.

Best Regards, Simon Riggs



Re: [HACKERS] bgwriter changes

2004-12-15 Thread Zeugswetter Andreas DAZ SD

> > > and stops early when either maxpages is reached or bgwriter_percent
> > > pages are scanned ?
> > 
> > Only if you redefine the meaning of bgwriter_percent.  At present it's
> > defined by reference to the total number of dirty pages, and that can't
> > be known without collecting them all.
> > 
> > If it were, say, a percentage of the total length of the T1/T2 lists,
> > then we'd have some chance of stopping the scan early.
> 
> ...which was exactly what was proposed for option (3).

But the benchmark run was with bgwriter_percent 100. I wanted to point out
that I think 100% is too much (it writes hot pages multiple times between
checkpoints). In the benchmark, bgwriter obviously falls behind; the delay
is too long. But if you reduce the delay you will start to see what I mean.

Actually I think what is really needed is a maximum number of pages we
want dirty at checkpoint. Since that would again require scanning all
pages, the next best definition would imho be to stop at a percentage (or
a number of pages short) of total T1/T2. Then you can still calculate a
worst-case IO for the checkpoint (assume that all hot pages are dirty).
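
For example (numbers purely illustrative, assuming the default 8 kB block
size): if the bgwriter never scans the hottest 80% of a 10000-buffer pool,
the worst case is that all of those pages are dirty at checkpoint:

/* Illustrative only -- not derived from the benchmark numbers in this
 * thread. */
#include <stdio.h>

int main(void)
{
    int    shared_buffers = 10000;   /* assumed pool size */
    double hot_fraction   = 0.80;    /* share of the pool bgwriter skips */
    int    block_size_kb  = 8;       /* default BLCKSZ */

    double worst_case_mb =
        shared_buffers * hot_fraction * block_size_kb / 1024.0;

    printf("worst-case checkpoint write: %.1f MB\n", worst_case_mb);
    return 0;
}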

Andreas



Re: [HACKERS] bgwriter changes

2004-12-15 Thread Simon Riggs
On Tue, 2004-12-14 at 13:30, Neil Conway wrote:
> In recent discussion[1] with Simon Riggs, there has been some talk of 
> making some changes to the bgwriter. To summarize the problem, the 
> bgwriter currently scans the entire T1+T2 buffer lists and returns a 
> list of all the currently dirty buffers. It then selects a subset of 
> that list (computed using bgwriter_percent and bgwriter_maxpages) to 
> flush to disk. Not only does this mean we can end up scanning a 
> significant portion of shared_buffers for every invocation of the 
> bgwriter, we also do the scan while holding the BufMgrLock, likely 
> hurting scalability.

Neil's summary is very clear, many thanks.

There have been many suggestions, patches and test results, so I have
attempted to summarise everything here, using Neil's post to give
structure to the other information:

> I think a fix for this in some fashion is warranted for 8.0. Possible 
> solutions:

I add two things to this structure:
i) the name of the patch that implements it (author's initials)
ii) benchmark results published for runs of it

> (1) Special-case bgwriter_percent=100. The only reason we need to return 
> a list of all the dirty buffers is so that we can choose n% of them to 
> satisfy bgwriter_percent. That is obviously unnecessary if we have 
> bgwriter_percent=100. I think this change won't help most users, 
> *unless* we also change bgwriter_percent=100 in the default configuration.

100pct.patch (SR)

Test results to date:
1. Mark Kirkwood ([HACKERS] [Testperf-general] BufferSync and bgwriter)
pgbench 1xCPU 1xDisk shared_buffers=1
showed 8.0RC1 had regressed compared with 7.4.6, but patch improved
performance significantly against 8.0RC1

Discounted now by both Neil and myself, since the same idea has been
more generally implemented as ideas (2) and (3) below.

> (2) Remove bgwriter_percent. I have yet to hear anyone argue that 
> there's an actual need for bgwriter_percent in tuning bgwriter behavior, 
> and one less GUC var is a good thing, all else being equal. This is 
> effectively the same as #1 with the default changed, only less flexibility.

There are 2 patches published which do the same thing:
- Partially implemented following Neil's suggestion: bg3.patch (SR)
- Fully implemented: bgwriter_rem_percent-1.patch (NC)
The patches have an identical effect on performance.

Test results to date:
1. Neil's testing was "inconclusive" for shared_buffers = 2500 on a
single cpu, single disk system (test used bgwriter_rem_percent-1.patch)
2. Mark Wong's OSDL tests published as test 211
analysis already posted on this thread; 
dbt-2 4 CPU, many disk, shared_buffers=6 (test used bg3.patch)
3% overall benefit, greatly reduced max transaction times
3. Mark Kirkwood's tests
pgbench 2xCPU 2xdisk, shared_buffers=1 (test used
bgwriter_rem_percent-1.patch)
Showed slight regression against RC1 - must be test variability because
the patch does less work and is very unlikely to cause a regression

> (3) Change the meaning of bgwriter_percent, per Simon's proposal. Make 
> it mean "the percentage of the buffer pool to scan, at most, to look for 
> dirty buffers". I don't think this is workable, at least not at this 
> point in the release cycle, because it means we might not smooth out 
> checkpoint load, one of the primary goals of the bgwriter (in this 
> proposal bgwriter would only ever consider writing out a small subset of 
> the total shared buffer cache: the least-recently-used n%, with 2% being 
> a suggested default). Some variant of this might be worth exploring for 
> 8.1 though.

Implemented as bg2.patch (SR)
Contains a small bug, easily fixed, which would not affect performance

Test results to date:
1. Mark Kirkwood's tests
pgbench 2xCPU 2xdisk, shared_buffers=1 (test used bg2.patch)
Showed improvement on RC1 and best option out of all three tests
(compared RC1, bg2.patch, bgwriter_rem_percent-1.patch), possibly
similar within bounds of test variability - but interesting enough to
investigate further.

Current situation seems to be:
- all test results indicate performance regressions in RC1 when
shared_buffers >= 1 and using multi-cpu/multi-disk systems
- option (2) has the most thoroughly confirmable test results and is
thought by all parties to be the simplest and most robust approach.
- some more test results would be useful to compare, to ensure that
applying the patch would be useful in all circumstances.

Approach (3) looks interesting and should be investigated for 8.1, since
it introduces a subtly different algorithm that may have "interesting
flight characteristics" and is more of a risk to the 8.0 release.

Thanks very much to all performance testers. It's important work.

-- 
Best Regards, Simon Riggs




Re: [HACKERS] bgwriter changes

2004-12-14 Thread Neil Conway
On Tue, 2004-12-14 at 09:23 -0500, Tom Lane wrote:
> At this point in the release cycle I'm not sure we should be making
> any significant changes for anything less than a crashing bug.

Yes, that's true, and I am definitely hesitant to make changes during
RC. That said, "adjust bgwriter defaults" has been on the "open items"
list for quite some time -- in some sense #2 is just a variant on that
idea.

> I'd want to see some pretty impressive benchmark results before we
> consider making a change now.

http://archives.postgresql.org/pgsql-hackers/2004-12/msg00426.php

is with a patch from Simon that implements #3. While that's not exactly
the same as #2, it does seem to suggest that the performance difference
is rather noticeable. If the problem does indeed exacerbate BufMgrLock
contention, it might be more noticeable still on an SMP machine.

I'm going to try and get some more benchmark data; if anyone else wants
to try the patch and contribute results they are welcome to.

-Neil





Re: [HACKERS] bgwriter changes

2004-12-14 Thread Simon Riggs
On Tue, 2004-12-14 at 19:40, Tom Lane wrote:
> "Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]> writes:
> > Is it possible to do a patch that produces a dirty buffer list in LRU order
> > and stops early when either maxpages is reached or bgwriter_percent
> > pages are scanned ?
> 
> Only if you redefine the meaning of bgwriter_percent.  At present it's
> defined by reference to the total number of dirty pages, and that can't
> be known without collecting them all.
> 
> If it were, say, a percentage of the total length of the T1/T2 lists,
> then we'd have some chance of stopping the scan early.

...which was exactly what was proposed for option (3).

-- 
Best Regards, Simon Riggs




Re: [HACKERS] bgwriter changes

2004-12-14 Thread Tom Lane
"Zeugswetter Andreas DAZ SD" <[EMAIL PROTECTED]> writes:
> Is it possible to do a patch that produces a dirty buffer list in LRU order
> and stops early when either maxpages is reached or bgwriter_percent
> pages are scanned ?

Only if you redefine the meaning of bgwriter_percent.  At present it's
defined by reference to the total number of dirty pages, and that can't
be known without collecting them all.

If it were, say, a percentage of the total length of the T1/T2 lists,
then we'd have some chance of stopping the scan early.

regards, tom lane



Re: [HACKERS] bgwriter changes

2004-12-14 Thread Zeugswetter Andreas DAZ SD
> (2) Remove bgwriter_percent. I have yet to hear anyone argue that 
> there's an actual need for bgwriter_percent in tuning 
> bgwriter behavior,

One argument for it is to avoid writing very hot pages.

> (3) Change the meaning of bgwriter_percent, per Simon's proposal. Make 
> it mean "the percentage of the buffer pool to scan, at most, to look for 
> dirty buffers". I don't think this is workable, at least not at this

In the long run I think we want to avoid the checkpoint needing to do a
lot of writing, without writing hot pages too often. This can only
reasonably be defined with a maximum number of pages we want to allow to
be dirty at checkpoint time. bgwriter_percent comes close to this meaning,
although in this sense the value would need to be high, like 80%.

I think we do want 2 settings. Think of one as a short-term value
(so bgwriter does not write everything in one run) and the other as a
long-term target over multiple runs.

Is it possible to do a patch that produces a dirty buffer list in LRU order
and stops early when either maxpages is reached or bgwriter_percent
pages are scanned ?

Andreas



Re: [HACKERS] bgwriter changes

2004-12-13 Thread Andrew Dunstan

Tom Lane wrote:
> However, due consideration should also be given to
> (4) Do nothing until 8.1.
> At this point in the release cycle I'm not sure we should be making
> any significant changes for anything less than a crashing bug.

If that's not the policy, then I don't understand the dev cycle state 
labels used.

In the commercial world, my approach would be that if this was 
determined to be necessary (about which I am moderately agnostic) then 
we would abort the current RC stage, effectively postponing the release.

cheers
andrew


Re: [HACKERS] bgwriter changes

2004-12-13 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> ...
> (2) Remove bgwriter_percent. I have yet to hear anyone argue that 
> there's an actual need for bgwriter_percent in tuning bgwriter behavior, 
> ...

Of the three offered solutions, I agree that that makes the most sense
(unless Jan steps up with a strong argument why this knob is needed).

However, due consideration should also be given to

(4) Do nothing until 8.1.

At this point in the release cycle I'm not sure we should be making
any significant changes for anything less than a crashing bug.

> A patch (implementing #2) is attached -- any benchmark results would be 
> helpful. Increasing shared_buffers (to 10,000 or more) should make the 
> problem noticeable.

I'd want to see some pretty impressive benchmark results before we
consider making a change now.

regards, tom lane



Re: [HACKERS] bgwriter changes

2004-12-13 Thread Bruce Momjian
Neil Conway wrote:
> (2) Remove bgwriter_percent. I have yet to hear anyone argue that 
> there's an actual need for bgwriter_percent in tuning bgwriter behavior, 
> and one less GUC var is a good thing, all else being equal. This is 
> effectively the same as #1 with the default changed, only less flexibility.

I prefer #2, and agree with you and Simon that something has to be done
for 8.0.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[HACKERS] bgwriter changes

2004-12-13 Thread Neil Conway
In recent discussion[1] with Simon Riggs, there has been some talk of 
making some changes to the bgwriter. To summarize the problem, the 
bgwriter currently scans the entire T1+T2 buffer lists and returns a 
list of all the currently dirty buffers. It then selects a subset of 
that list (computed using bgwriter_percent and bgwriter_maxpages) to 
flush to disk. Not only does this mean we can end up scanning a 
significant portion of shared_buffers for every invocation of the 
bgwriter, we also do the scan while holding the BufMgrLock, likely 
hurting scalability.
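
Roughly, each bgwriter round currently looks like the sketch below (a
simplified illustration only, not the actual StrategyDirtyBufferList() /
BufferSync() source; the lock and write calls are placeholders, and the
list is assumed to be in LRU order):

#include <stdbool.h>
#include <stdlib.h>

typedef struct { bool dirty; } BufSketch;

/* Placeholder stubs standing in for the real buffer-manager calls. */
static void acquire_bufmgr_lock(void) {}
static void release_bufmgr_lock(void) {}
static void write_buffer(int buf_id) { (void) buf_id; }

static void bgwriter_round(const BufSketch *buffers, int nbuffers,
                           int bgwriter_percent, int bgwriter_maxpages)
{
    int *dirty = malloc(sizeof(int) * nbuffers);
    int  ndirty = 0;

    /* The whole T1+T2 list is scanned while the single global lock is
     * held, so the cost of every round grows with shared_buffers. */
    acquire_bufmgr_lock();
    for (int i = 0; i < nbuffers; i++)
        if (buffers[i].dirty)
            dirty[ndirty++] = i;
    release_bufmgr_lock();

    /* Only a subset of what was collected is actually written,
     * least recently used first (rounding the percentage up). */
    int towrite = (ndirty * bgwriter_percent + 99) / 100;
    if (towrite > bgwriter_maxpages)
        towrite = bgwriter_maxpages;
    for (int i = 0; i < towrite; i++)
        write_buffer(dirty[i]);

    free(dirty);
}

int main(void)
{
    BufSketch pool[4] = {{true}, {false}, {true}, {true}};
    bgwriter_round(pool, 4, 100, 100);
    return 0;
}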

I think a fix for this in some fashion is warranted for 8.0. Possible 
solutions:

(1) Special-case bgwriter_percent=100. The only reason we need to return 
a list of all the dirty buffers is so that we can choose n% of them to 
satisfy bgwriter_percent. That is obviously unnecessary if we have 
bgwriter_percent=100. I think this change won't help most users, 
*unless* we also change bgwriter_percent=100 in the default configuration.

(2) Remove bgwriter_percent. I have yet to hear anyone argue that 
there's an actual need for bgwriter_percent in tuning bgwriter behavior, 
and one less GUC var is a good thing, all else being equal. This is 
effectively the same as #1 with the default changed, only less flexibility.

(3) Change the meaning of bgwriter_percent, per Simon's proposal. Make 
it mean "the percentage of the buffer pool to scan, at most, to look for 
dirty buffers". I don't think this is workable, at least not at this 
point in the release cycle, because it means we might not smooth out 
checkpoint load, one of the primary goals of the bgwriter (in this 
proposal bgwriter would only ever consider writing out a small subset of 
the total shared buffer cache: the least-recently-used n%, with 2% being 
a suggested default). Some variant of this might be worth exploring for 
8.1 though.

A patch (implementing #2) is attached -- any benchmark results would be 
helpful. Increasing shared_buffers (to 10,000 or more) should make the 
problem noticeable.

Opinions on which route is the best, or on some alternative solution? My 
inclination is toward #2, but I'm not dead-set on it.

-Neil
[1] http://archives.postgresql.org/pgsql-hackers/2004-12/msg00386.php
Index: doc/src/sgml/runtime.sgml
===
RCS file: /var/lib/cvs/pgsql/doc/src/sgml/runtime.sgml,v
retrieving revision 1.296
diff -c -r1.296 runtime.sgml
*** doc/src/sgml/runtime.sgml	13 Dec 2004 18:05:09 -	1.296
--- doc/src/sgml/runtime.sgml	14 Dec 2004 04:52:26 -
***
*** 1350,1382 
  
   Specifies the delay between activity rounds for the
   background writer.  In each round the writer issues writes
!  for some number of dirty buffers (controllable by the
!  following parameters).  The selected buffers will always be
!  the least recently used ones among the currently dirty
!  buffers.  It then sleeps for bgwriter_delay
!  milliseconds, and repeats.  The default value is 200. Note
!  that on many systems, the effective resolution of sleep
!  delays is 10 milliseconds; setting bgwriter_delay
!  to a value that is not a multiple of 10 may have the same
!  results as setting it to the next higher multiple of 10.
!  This option can only be set at server start or in the
!  postgresql.conf file.
! 
!
!   
! 
!   
!bgwriter_percent (integer)
!
! bgwriter_percent configuration parameter
!
!
! 
!  In each round, no more than this percentage of the currently
!  dirty buffers will be written (rounding up any fraction to
!  the next whole number of buffers).  The default value is
!  1. This option can only be set at server start or in the
!  postgresql.conf file.
  
 

--- 1350,1367 
  
   Specifies the delay between activity rounds for the
   background writer.  In each round the writer issues writes
!  for some number of dirty buffers (controllable by
!  bgwriter_maxpages).  The selected buffers
!  will always be the least recently used ones among the
!  currently dirty buffers.  It then sleeps for
!  bgwriter_delay milliseconds, and repeats.  The
!  default value is 200. Note that on many systems, the
!  effective resolution of sleep delays is 10 milliseconds;
!  setting bgwriter_delay to a value that is not a
!  multiple of 10 may have the same results as setting it to the
!  next higher multiple of 10.  This option can only be set at
!  server start or in the postgresql.conf
!  file.
  
 

***
*** 1398,1409 
   
  
   
!   Smaller values of bgwriter_percent and
!   bgwriter_maxpages reduce the extra