Re: [PERFORM] Background writer underemphasized ...

2008-04-22 Thread Greg Smith

On Sun, 20 Apr 2008, James Mansion wrote:

Are you suggesting that the disk subsystem has already decided on its 
strategy for a set of seeks and writes and will not insert new 
instructions into an existing elevator plan until it is completed and it 
looks at the new requests?


No, just that each component only gets to sort across what it sees, and 
because of that the sorting horizon may not be optimized the same way 
depending on how writes are sent.


Let me try to construct a credible example of this elusive phenomenon:

-We have a server with lots of RAM
-The disk controller cache has 256MB of cache
-We have 512MB of data to write that's spread randomly across the database 
disk.


Case 1:  Write early

Let's say the background writer writes a sample of 1/2 the data right now 
in anticipation of needing those buffers for something else soon.  It's 
now in the controller's cache and sorted already.  The controller is 
working on it.  Presume it starts at the beginning of the disk and works 
its way toward the end, seeking past gaps in between as needed.


The checkpoint hits just after that happens.  The remaining 256MB gets 
dumped into the OS buffer cache.  This gets elevator sorted by the OS, 
which will now write it out to the card in sorted order, beginning to end. 
But writes to the controller will block because most of the cache is 
filled, so they trickle in as data writes are completed and the cache gets 
space.  Let's presume they're all ignored, because the drive is working 
toward the end and these are closer to the beginning than the ones it's 
working on.


Now the disk is near the end of its logical space, and there's a cache 
full of new dirty checkpoint data.  But the OS has finished spooling all 
its dirty stuff into the cache so the checkpoint is over.  During that 
checkpoint the disk has to seek enough to cover the full logical "length" 
of the volume.  The controller will continue merrily writing now until its 
cache clears again, moving from the end of the disk back to the beginning 
again.


Case 2:  Delayed writes, no background writer use

The checkpoint hits.  512MB of data gets dumped into the OS cache.  It 
sorts and feeds that in sorted order into the cache.  Drive starts at the 
beginning and works its way through everything.  By the time it's finished
seeking its way across half the disk, the OS is now unblocked because the
remaining data is in the cache.


Can you see how in this second case, it may very well be that the 
checkpoint finishes *faster* because we waited longer to start writing? 
Because the OS has a much larger elevator sorting capacity than the disk 
controller, leaving data in RAM and waiting until there's more of it 
queued up there has approximately halved the number/size of seeks involved 
before the controller can say it's absorbed all the writes.
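
The two cases can be sketched numerically.  This is an illustrative toy
model (made-up block addresses and sizes, nothing PostgreSQL-specific):
total head travel for two independent elevator passes over random halves
of the writes, versus one pass over the whole set sorted at once.

```python
# Toy model of the two cases above: compare total seek distance when
# half of the randomly-placed writes are sorted and issued early
# (Case 1) versus sorting everything in one large batch (Case 2).
import random

random.seed(42)
blocks = random.sample(range(1_000_000), 4096)  # random positions on disk

def seek_distance(order):
    """Total head travel for visiting block addresses in the given order."""
    return sum(abs(b - a) for a, b in zip(order, order[1:]))

# Case 1: first half sorted and written early, second half sorted
# separately once the checkpoint dumps it -- two elevator passes.
half = len(blocks) // 2
case1 = seek_distance(sorted(blocks[:half])) + seek_distance(sorted(blocks[half:]))

# Case 2: everything queued in the (larger) OS cache and sorted once --
# a single beginning-to-end elevator pass.
case2 = seek_distance(sorted(blocks))

print(f"two passes: {case1}, one pass: {case2}")
# Each independent pass still has to cover nearly the full disk, so the
# two-pass case travels roughly twice as far as the single sorted pass.
```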


This sounds a bit tenuous at best - almost to the point of being a 
bug. Do you believe this is universal?


Of course not, or the background writer would be turned off by default. 
There are occasional reports where it just gets in the way, typically in 
ones where the controller has its own cache and there's a bad interaction 
there.


This is not unique to this situation, so in that sense this class of 
problems is universal.  There are all kinds of operating system
configurations that are tuned to delay writing in hopes of making those 
writes more efficient, because the OS usually has a much larger capacity 
for buffering pages to optimize what's going to happen than the downstream 
controller/disk caches do.  Once you've filled a downstream cache, you may 
not be able to influence what that device executing those requests does 
anymore until that cache clears.


Note that the worst-case situation here actually gets worse in some 
respects the larger the downstream cache is, because there's that much 
more data you have to wait to clear before you can necessarily influence 
what the disks are doing if you've made a bad choice in what you asked it 
to write early.  If the disk head is too far away from where you want to 
write or read to now, you can be in for quite a wait before it gets back 
your way if the filled cache is large.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Background writer underemphasized ...

2008-04-20 Thread James Mansion

Greg Smith wrote:
If you write a giant block of writes, those tend to be sorted by the 
OS and possibly the controller to reduce total seeks.  That's a pretty 
efficient write and it can clear relatively fast.


But if you've been trickling writes in an unstructured form and in low
volume, there can be a stack of them that aren't sorted well blocking 
the queue from clearing.  With a series of small writes, it's not that 
difficult to end up in a situation where a controller cache is filled 
with writes containing a larger seek component than you'd have gotten 
had you written in larger blocks that took advantage of more OS-level 
elevator sorting.  There's actually a pending patch to try to improve
this situation with regard to checkpoint writes in the queue.


Seeks are so slow compared to more sequential writes that you really 
can end up in the counterintuitive situation that you finish faster by 
avoiding early writes, even in cases when the disk is the bottleneck.

I'm sorry but I am somewhat unconvinced by this.

I accept that by early submission the disk subsystem may end up doing 
more seeks and more writes in total, but when the dam breaks at the 
start of the checkpoint, how can it help to have _more_ data write 
volume and _more_ implied seeks offered up at that point?


Are you suggesting that the disk subsystem has already decided on its 
strategy for a set of seeks and writes and will not insert new 
instructions into an existing elevator plan until it is completed and it 
looks at the new requests? This sounds a bit tenuous at best - almost to 
the point of being a bug. Do you believe this is universal?


James




Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread Greg Smith

On Sat, 19 Apr 2008, James Mansion wrote:

But isn't it the case that while using background writer might result in 
*slightly* more data to write (since data that is updated several times 
might actually be sent several times), the total amount of data in both 
cases is much the same?


Really depends on your workload, how many wasted writes there are.  It 
might be significant, it might only be slight.


And if the buffer backed up in the BGW case, wouldn't it also back up 
(more?) if the writes are deferred?  And in fact by sending earlier, the 
real bottleneck (the disks) could have been getting on with it and 
starting their IO earlier?


If you write a giant block of writes, those tend to be sorted by the OS 
and possibly the controller to reduce total seeks.  That's a pretty 
efficient write and it can clear relatively fast.


But if you've been trickling writes in an unstructured form and in low
volume, there can be a stack of them that aren't sorted well blocking the 
queue from clearing.  With a series of small writes, it's not that 
difficult to end up in a situation where a controller cache is filled with 
writes containing a larger seek component than you'd have gotten had you 
written in larger blocks that took advantage of more OS-level elevator 
sorting.  There's actually a pending patch to try to improve this
situation with regard to checkpoint writes in the queue.


Seeks are so slow compared to more sequential writes that you really can 
end up in the counterintuitive situation that you finish faster by 
avoiding early writes, even in cases when the disk is the bottleneck.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread James Mansion

Greg Smith wrote:
Using the background writer more assures that the cache on the 
controller is going to be written to aggressively, so it may be 
somewhat filled already come checkpoint time.  If you leave the writer 
off, when the checkpoint comes you're much more likely to have the 
full 2GB available to absorb a large block of writes.
But isn't it the case that while using background writer might result in 
*slightly* more data to write (since data that is updated several times 
might actually be sent several times), the total amount of data in both 
cases is much the same?  And if the buffer backed up in the BGW case, 
wouldn't it also back up (more?) if the writes are deferred?  And in 
fact by sending earlier, the real bottleneck (the disks) could have been 
getting on with it and starting their IO earlier?


Can you explain your reasoning a bit more?

James




Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread Greg Smith

On Thu, 17 Apr 2008, Marinos Yannikos wrote:

Controller is 
http://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB 
cache (writeback was enabled).


Ah.  Sometimes these fiber channel controllers can get a little weird 
(compared with more direct storage) when the cache gets completely filled. 
If you think about it, flushing 2GB out takes a pretty significant
amount of time even at 4Gbps, and once all the caches at every
level are filled it's possible for that to turn into a bottleneck.


Using the background writer more assures that the cache on the controller 
is going to be written to aggressively, so it may be somewhat filled 
already come checkpoint time.  If you leave the writer off, when the 
checkpoint comes you're much more likely to have the full 2GB available to 
absorb a large block of writes.


You suggested a documentation update; it would be fair to suggest that 
there are caching/storage setups where even the 8.3 BGW might just be 
getting in the way.  The right thing to do there is just turn it off 
altogether, which should work a bit better than the exact tuning you 
suggested.
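
For reference, "turn it off altogether" might look like this in
postgresql.conf (8.3 parameter name, value as Greg describes; a sketch,
not a general recommendation):

```
# postgresql.conf sketch: disabling the 8.3 background writer entirely
bgwriter_lru_maxpages = 0   # no LRU cleaning scans between checkpoints
```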


Perhaps the background writer takes too long to find the required number of 
dirty pages among the 16GB shared buffers (currently), which should be mostly 
clean.


That would only cause a minor increase in CPU usage.  You certainly don't 
want to reduce shared_buffers for all the reasons you list.


I was under the impression that wal_buffers should be kept at/above the size 
of typical transactions.


It doesn't have to be large enough to hold a whole transaction, just big 
enough that when it fills and a write is forced that write isn't trivially 
small (and therefore wasteful in terms of I/O size).  There's a fairly 
good discussion of what's actually involved here at 
http://archives.postgresql.org/pgsql-advocacy/2003-02/msg00053.php ; as I 
suggested, I've seen and heard others report small improvements in raising 
from the tiny default value to the small MB range, but beyond that you're 
just wasting RAM that could buffer database pages instead.
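
For concreteness, the "small MB range" described above might look like
this in postgresql.conf (the value is illustrative, not a tested
recommendation):

```
# postgresql.conf sketch: wal_buffers raised from the tiny default into
# the small-MB range discussed above; beyond a few MB this mostly takes
# RAM away from buffering database pages
wal_buffers = 1MB
```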


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Background writer underemphasized ...

2008-04-17 Thread Marinos Yannikos

Greg Smith schrieb:
You also didn't mention what disk controller you have, or how much write 
cache it has (if any).


8.3.1, Controller is 
http://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB 
cache (writeback was enabled).


That's almost turning the background writer off.  If that's what 
improved your situation, you might as well as turn it off altogether by 
setting all the bgwriter_lru_maxpages parameters to be 0.  The 
combination you describe here, running very infrequently but with 
lru_maxpages set to its maximum, is a bit odd.


Perhaps the background writer takes too long to find the required number 
of dirty pages among the 16GB shared buffers (currently), which should 
be mostly clean. We could reduce the shared buffers to a more commonly 
used amount (<= 2GB or so) but some of our most frequently used tables 
are in the 8+ GB range and sequential scans are much faster with this 
setting (for ~, ~* etc.).


Other options we have tried/used were shared_buffers between 200MB and 
20GB, wal_buffers = 256MB, wal_writer_delay=5000ms ...


The useful range for wal_buffers tops at around 1MB, so no need to get 
extreme there.  wal_writer_delay shouldn't matter here unless you turned 
on asynchronous commit.


I was under the impression that wal_buffers should be kept at/above the 
size of typical transactions. We do have some large-ish ones that are
time-critical.


-mjy



Re: [PERFORM] Background writer underemphasized ...

2008-04-16 Thread Bill Moran
In response to Greg Smith <[EMAIL PROTECTED]>:

> On Wed, 16 Apr 2008, Bill Moran wrote:
> 
> >> bgwriter_delay = 10000ms # 10-10000ms between rounds
> >> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round
> > Have you watched closely under load to ensure that you're not seeing a 
> > huge performance hit every 10s when the bgwriter kicks off?
> 
> bgwriter_lru_maxpages = 1000 means that any background writer pass can 
> write at most 1000 pages = 8MB.  Those are buffered writes going into the 
> OS cache, which it will write out at its own pace later.  That isn't going 
> to cause a performance hit when it happens.
> 
> That isn't the real mystery though--where's the RAID5 rant I was expecting 
> from you?

Oh crap ... he _is_ using RAID-5!  I completely missed an opportunity to
rant!

blah blah blah ... RAID-5 == evile, etc ...

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] Background writer underemphasized ...

2008-04-16 Thread Greg Smith

On Wed, 16 Apr 2008, Bill Moran wrote:


bgwriter_delay = 10000ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round
Have you watched closely under load to ensure that you're not seeing a 
huge performance hit every 10s when the bgwriter kicks off?


bgwriter_lru_maxpages = 1000 means that any background writer pass can 
write at most 1000 pages = 8MB.  Those are buffered writes going into the 
OS cache, which it will write out at its own pace later.  That isn't going 
to cause a performance hit when it happens.
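
The arithmetic behind that can be checked directly (8 kB default block
size; this is back-of-the-envelope math from the thread, not an API):

```python
# Back-of-the-envelope check of the rates discussed above: the per-pass
# write volume and the ceiling on background writer output.
PAGE_SIZE = 8192                      # default PostgreSQL block size, bytes

def bgwriter_max_rate(lru_maxpages, delay_ms):
    """Upper bound on background writer output in MB/s."""
    passes_per_sec = 1000 / delay_ms
    return lru_maxpages * PAGE_SIZE * passes_per_sec / (1024 * 1024)

# One pass can write at most 1000 pages = 8 MB (buffered, into OS cache):
print(1000 * PAGE_SIZE / (1024 * 1024))          # 7.8125 MB per pass

# Default delay (200 ms = 5 passes/sec) vs the 10000 ms used in the thread:
print(bgwriter_max_rate(1000, 200))              # 39.0625 MB/s ceiling
print(bgwriter_max_rate(1000, 10000))            # 0.78125 MB/s ceiling
```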


That isn't the real mystery though--where's the RAID5 rant I was expecting 
from you?


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Background writer underemphasized ...

2008-04-16 Thread Greg Smith

On Wed, 16 Apr 2008, Marinos Yannikos wrote:

to save some people a headache or two: I believe we just solved our 
performance problem in the following scenario:


I was about to ask your PostgreSQL version but since I see you mention 
wal_writer_delay it must be 8.3.  Knowing your settings for shared_buffers 
and checkpoint_segments in particular would make this easier to 
understand.


You also didn't mention what disk controller you have, or how much write 
cache it has (if any).



This helped with our configuration:
bgwriter_delay = 10000ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round


The default for bgwriter_delay is 200ms = 5 passes/second.  Increasing
that to 10000ms means one pass every 10 seconds instead.
That's almost turning the background writer off.  If that's what improved 
your situation, you might as well as turn it off altogether by setting all 
the bgwriter_lru_maxpages parameters to be 0.  The combination you 
describe here, running very infrequently but with lru_maxpages set to its 
maximum, is a bit odd.


Other options we have tried/used were shared_buffers between 200MB and 
20GB, wal_buffers = 256MB, wal_writer_delay=5000ms ...


The useful range for wal_buffers tops at around 1MB, so no need to get 
extreme there.  wal_writer_delay shouldn't matter here unless you turned 
on asynchronous commit.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Background writer underemphasized ...

2008-04-16 Thread Bill Moran
In response to Marinos Yannikos <[EMAIL PROTECTED]>:

> Hi,
> 
> to save some people a headache or two: I believe we just solved our 
> performance problem in the following scenario:
> 
> - Linux 2.6.24.4
> - lots of RAM (32GB)
> - enough CPU power (4 cores)
> - disks with relatively slow random writes (SATA RAID-5 / 7 disks, 128K 
> stripe, ext2)
> 
> Our database is around 86GB, the busy parts being 20-30GB. Typical load 
> is regular reads of all sizes (large joins, sequential scans on an 8GB
> table, many small selects with few rows) interspersed with writes of 
> several 1000s of rows on the busier tables by several clients.
> 
> After many tests and research revolving around the Linux I/O-Schedulers 
> (which still have some issues one should be wary about: 
> http://lwn.net/Articles/216853/) because we saw problems when occasional 
> (intensive) writes completely starved all I/O, we discovered that 
> changing the default settings for the background writer seems to have 
> solved all these problems. Performance is much better now with fsync on 
> than it was with fsync off previously, no other configuration options 
> had a noticeable effect on performance (or these problems rather).
> 
> This helped with our configuration:
> bgwriter_delay = 10000ms # 10-10000ms between rounds
> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round

What other values have you tried for this?  Have you watched closely
under load to ensure that you're not seeing a huge performance hit
every 10s when the bgwriter kicks off?

I'm with Chris -- I would be inclined to try a range of values to find
a sweet spot, and I would be _very_ shocked to find that sweet spot
at the values you mention.  However, if that really is the demonstrable
sweet spot, there may be something we all can learn.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



Re: [PERFORM] Background writer underemphasized ...

2008-04-16 Thread Chris Browne
[EMAIL PROTECTED] (Marinos Yannikos) writes:
> This helped with our configuration:
> bgwriter_delay = 10000ms # 10-10000ms between rounds
> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round

FYI, I'd be inclined to reduce both of those numbers, as it should
reduce the variability of behaviour.

Rather than cleaning 1K pages every 10s, I would rather clean 100
pages every 1s, as that will have much the same effect, but spread the
work more evenly.  Or perhaps 10 pages every 100ms...

Cut the delay *too* low and this might make the background writer, in
effect, poll *too* often, and start chewing resources, but there's
doubtless some "sweet spot" in between...
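
The equivalence Chris describes is easy to verify (illustrative
arithmetic only): all three settings clean at the same long-run rate,
and only the burstiness differs.

```python
# The suggested settings all clean 100 pages/second on average; only the
# size of each burst changes. Numbers come from the message above.
def pages_per_second(maxpages, delay_ms):
    """Long-run background writer cleaning rate for a given setting."""
    return maxpages * 1000 / delay_ms

print(pages_per_second(1000, 10000))   # 1000 pages every 10 s  -> 100.0
print(pages_per_second(100, 1000))     # 100 pages every 1 s    -> 100.0
print(pages_per_second(10, 100))       # 10 pages every 100 ms  -> 100.0
```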
-- 
"cbbrowne","@","cbbrowne.com"
http://linuxdatabases.info/info/oses.html
"For systems, the analogue of a face-lift is to add to the control
graph an edge that creates a cycle, not just an additional node."
-- Alan J. Perlis



[PERFORM] Background writer underemphasized ...

2008-04-16 Thread Marinos Yannikos

Hi,

to save some people a headache or two: I believe we just solved our 
performance problem in the following scenario:


- Linux 2.6.24.4
- lots of RAM (32GB)
- enough CPU power (4 cores)
- disks with relatively slow random writes (SATA RAID-5 / 7 disks, 128K 
stripe, ext2)


Our database is around 86GB, the busy parts being 20-30GB. Typical load 
is regular reads of all sizes (large joins, sequential scans on an 8GB
table, many small selects with few rows) interspersed with writes of 
several 1000s of rows on the busier tables by several clients.


After many tests, and much research revolving around the Linux I/O
schedulers (which still have some issues one should be wary about:
http://lwn.net/Articles/216853/), undertaken because we saw problems
when occasional (intensive) writes completely starved all I/O, we
discovered that changing the default settings for the background writer
seems to have solved all these problems. Performance is much better now
with fsync on than it was with fsync off previously; no other
configuration options had a noticeable effect on performance (or on
these problems, rather).


This helped with our configuration:
bgwriter_delay = 10000ms # 10-10000ms between rounds
bgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round

Previously, our typical writes resulted in around 5-10MB/s going to disk
and some reads stalling; now we are seeing typical disk I/O in the
30-60MB/s range with write load present and no noticeable problems with
reads except when autovacuum's "analyze" is running. Other options we
have tried/used were shared_buffers between 200MB and 20GB, wal_buffers 
= 256MB, wal_writer_delay=5000ms ...


So, using this is highly recommended and I would say that the 
documentation does not do it justice... (and yes, I could have figured 
it out earlier)


-mjy
