"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> I think you are assuming that the next write of the same block won't
> use another OS cache block. I doubt if that's the way writes are handled
> by the kernel. Each write would typically end up being queued up in the
> kernel where each write will ha
Pavan Deolasee wrote:
> On 7/11/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> >
> >I was able
> >to reproduce the phenomenon with a simple C program that writes 8k
> >blocks in random order to a fixed size file. I've attached it along with
> >output of running it on my test server. The out
On 7/11/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
I was able
to reproduce the phenomenon with a simple C program that writes 8k
blocks in random order to a fixed size file. I've attached it along with
output of running it on my test server. The output shows how the writes
start to period
In the last couple of days, I've been running a lot of DBT-2 tests and
smaller microbenchmarks with different bgwriter settings and
experimental patches, but I have not been able to produce a repeatable
test case where any of the bgwriter configurations perform better than
not having bgwriter a
On Fri, 2007-07-06 at 10:55 +0100, Heikki Linnakangas wrote:
> We need to get the requirements straight.
>
> One goal of bgwriter is clearly to keep just enough buffers clean in
> front of the clock hand so that backends don't need to do writes
> themselves until the next bgwriter iteration. Bu
On Thu, 2007-07-05 at 21:50 +0100, Heikki Linnakangas wrote:
> All test runs were also patched to count the # of buffer allocations,
> and # of buffer flushes performed by bgwriter and backends. Here's those
> results (I hope the indentation gets through properly):
>
> imo
On Fri, 6 Jul 2007, Heikki Linnakangas wrote:
To strike a balance between cleaning buffers ahead of possible bursts in the
future and not doing unnecessary I/O when no such bursts come, I think a
reasonable strategy is to write buffers with usage_count=0 at a slow pace
when there's no buffer a
"Tom Lane" <[EMAIL PROTECTED]> writes:
>> That would be overly aggressive on a workload that's steady on average,
>> but consists of small bursts. Like this: 0 0 0 0 100 0 0 0 0 100 0 0 0 0
>> 100. You'd end up writing ~100 pages on every bgwriter round, but you
>> only need an average of 20 pa
On Fri, 6 Jul 2007, Heikki Linnakangas wrote:
I've been running these tests with a bgwriter_delay of 10 ms, which is probably
too aggressive.
Even on relatively high-end hardware, I've found it hard to get good
results out of the BGW with the delay under 50ms--particularly when trying
to do som
Heikki Linnakangas wrote:
I scheduled a test with the moving average method as well, we'll see how
that fares.
Not too well :(.
Strange. The total # of writes is on par with having bgwriter disabled,
but the physical I/O graphs show more I/O (on par with the aggressive
bgwriter), and the resp
On Fri, 6 Jul 2007, Tom Lane wrote:
The problem is that it'd be very hard to track how far ahead of the
recycling sweep hand we are, because that number has to be measured
in usage-count-zero pages. I see no good way to know how many of the
pages we scanned before have been touched (and given n
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
                     imola-336  imola-337  imola-340
writes by checkpoint     38302      30410      39529
writes by bgwriter      350113    2205782    1418672
writes by backends     1834333
Greg Smith <[EMAIL PROTECTED]> writes:
> On Thu, 5 Jul 2007, Tom Lane wrote:
>> This would give us a safety margin such that buffers_to_clean is not
>> less than the largest demand observed in the last 100 iterations...and
>> it takes quite a while for the memory of a demand spike to be forgotten
Tom Lane wrote:
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
buffers_to_clean = Max(buffers_used * 1.1,
buffers_to_clean * 0.999);
That would be overly aggressive on a workload that's steady on average,
but consists of small bursts. Like this: 0 0 0 0 100 0 0 0 0 100 0 0 0
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> buffers_to_clean = Max(buffers_used * 1.1,
>> buffers_to_clean * 0.999);
> That would be overly aggressive on a workload that's steady on average,
> but consists of small bursts. Like this: 0 0 0 0 100 0 0 0 0 100 0 0 0 0
> 100.
Greg Smith wrote:
On Fri, 6 Jul 2007, Heikki Linnakangas wrote:
There's something wrong with that. The number of buffer allocations
shouldn't depend on the bgwriter strategy at all.
I was seeing a smaller (closer to 5%) increase in buffer allocations
switching from no background writer to us
On Fri, 6 Jul 2007, Heikki Linnakangas wrote:
There's something wrong with that. The number of buffer allocations shouldn't
depend on the bgwriter strategy at all.
I was seeing a smaller (closer to 5%) increase in buffer allocations
switching from no background writer to using the stock one b
Greg Smith wrote:
As you can see, I achieved the goal of almost never having a backend
write its own buffer, so yeah for that. That's the only good thing I
can say about it though. The TPS results take a moderate dive, and
there's about 10% more buffer allocations. The big and obvious issues
I just got my own first set of useful tests of using the new "remember
where you last scanned to" BGW implementation suggested by Tom. What I
did was keep the existing % to scan, but cut back the number to scan when
so close to a complete lap ahead of the strategy point that I'd cross it
if I s
Greg Smith wrote:
On Thu, 5 Jul 2007, Heikki Linnakangas wrote:
It looks like Tom's idea is not a winner; it leads to more writes than
necessary.
What I came away with as the core of Tom's idea is that the cleaning/LRU
writer shouldn't ever scan the same section of the buffer cache twice,
b
On Thu, 5 Jul 2007, Tom Lane wrote:
This would give us a safety margin such that buffers_to_clean is not
less than the largest demand observed in the last 100 iterations...and
it takes quite a while for the memory of a demand spike to be forgotten
completely.
If you tested this strategy even
On Thu, 5 Jul 2007, Heikki Linnakangas wrote:
It looks like Tom's idea is not a winner; it leads to more writes than
necessary.
What I came away with as the core of Tom's idea is that the cleaning/LRU
writer shouldn't ever scan the same section of the buffer cache twice,
because anything tha
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
>                      imola-336  imola-337  imola-340
> writes by checkpoint     38302      30410      39529
> writes by bgwriter      350113    2205782    1418672
> writes by backends     1834333     265755    7
I ran some DBT-2 tests to compare different bgwriter strategies:
http://community.enterprisedb.com/bgwriter/
imola-336 was run with minimal bgwriter settings, so that most writes
are done by backends. imola-337 was patched with an implementation of
Tom's bgwriter idea, trying to aggressively k