Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread Greg Smith

On Thu, 17 Apr 2008, Marinos Yannikos wrote:

Controller is 
http://www.infortrend.com/main/2_product/es_a08(12)f-g2422.asp with 2GB 
cache (writeback was enabled).


Ah.  Sometimes these fiber channel controllers can get a little weird 
(compared with more direct storage) when the cache gets completely filled. 
If you think about it, flushing 2GB out takes a pretty significant 
amount of time even at 4Gbps, and once all the caches at every 
level are filled it's possible for that to turn into a bottleneck.


Using the background writer more assures that the cache on the controller 
is going to be written to aggressively, so it may be somewhat filled 
already come checkpoint time.  If you leave the writer off, when the 
checkpoint comes you're much more likely to have the full 2GB available to 
absorb a large block of writes.


You suggested a documentation update; it would be fair to suggest that 
there are caching/storage setups where even the 8.3 BGW might just be 
getting in the way.  The right thing to do there is just turn it off 
altogether, which should work a bit better than the exact tuning you 
suggested.
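
For anyone who wants to try that, here's a minimal sketch of what "off" 
looks like in postgresql.conf on 8.3.  Only the first setting actually 
does anything; the other two are just the defaults, shown for context:

    # postgresql.conf (8.3) -- turn the background writer off entirely
    bgwriter_lru_maxpages = 0     # 0 disables background writing
    bgwriter_delay = 200ms        # default wake interval; moot once disabled
    bgwriter_lru_multiplier = 2.0 # default; ignored while maxpages is 0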


Perhaps the background writer takes too long to find the required number of 
dirty pages among the 16GB shared buffers (currently), which should be mostly 
clean.


That would only cause a minor increase in CPU usage.  You certainly don't 
want to reduce shared_buffers for all the reasons you list.


I was under the impression that wal_buffers should be kept at/above the size 
of typical transactions.


It doesn't have to be large enough to hold a whole transaction, just big 
enough that when it fills and a write is forced that write isn't trivially 
small (and therefore wasteful in terms of I/O size).  There's a fairly 
good discussion of what's actually involved here at 
http://archives.postgresql.org/pgsql-advocacy/2003-02/msg00053.php ; as I 
suggested, I've seen and heard others report small improvements in raising 
from the tiny default value to the small MB range, but beyond that you're 
just wasting RAM that could buffer database pages instead.
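
To put a number on that, a sketch of the small MB range I mean (the exact 
sweet spot is workload-dependent, so treat the value as a starting point 
rather than a recommendation):

    # postgresql.conf -- wal_buffers in the small MB range
    wal_buffers = 1MB   # up from the 64kB default; requires a restart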


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread James Mansion

Greg Smith wrote:
Using the background writer more assures that the cache on the 
controller is going to be written to aggressively, so it may be 
somewhat filled already come checkpoint time.  If you leave the writer 
off, when the checkpoint comes you're much more likely to have the 
full 2GB available to absorb a large block of writes.
But isn't it the case that while using the background writer might result in 
*slightly* more data to write (since data that is updated several times 
might actually be sent several times), the total amount of data in both 
cases is much the same?  And if the buffer backed up in the BGW case, 
wouldn't it also back up (more?) if the writes are deferred?  And in 
fact by sending earlier, the real bottleneck (the disks) could have been 
getting on with it and starting their IO earlier?


Can you explain your reasoning a bit more?

James




Re: [PERFORM] Exact index overhead

2008-04-19 Thread Scott Marlowe
On Thu, Apr 17, 2008 at 9:42 AM, Gunther Mayer
[EMAIL PROTECTED] wrote:
 Pavan Deolasee wrote:

  2. The existing block should have enough free space to accommodate the
  new version.  A less-than-100 fillfactor may help you given your rate
  of updates.

 I see, as soon as a new block is required for the new version the index
 pointer needs updating too, I understand now. But at least in the common
 case of space being available the index overhead is reduced to zero. I can
 live with that.

Quick clarification, it's the table, not the index that has to have
free space for the new row version.  This rewards good normalization
practices (narrower rows) and a lower fill factor.
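
As an illustration, here's a sketch against a made-up table showing both 
ways to set a lower heap fillfactor (8.2 and later syntax):

    -- leave ~10% of every heap page free so updated rows can stay on-page
    CREATE TABLE accounts (
        id      integer PRIMARY KEY,
        balance numeric NOT NULL
    ) WITH (fillfactor = 90);

    -- or, for an existing table (only affects pages written afterwards):
    ALTER TABLE accounts SET (fillfactor = 90);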



Re: [PERFORM] Oddly slow queries

2008-04-19 Thread Scott Marlowe
On Wed, Apr 16, 2008 at 3:48 PM, Thomas Spreng [EMAIL PROTECTED] wrote:

  On 16.04.2008, at 17:42, Chris Browne wrote:

  [EMAIL PROTECTED] (Thomas Spreng) writes:
 
   On 16.04.2008, at 01:24, PFC wrote:

 The queries in question (select's) occasionally take up to 5 mins
 even if they take ~2-3 sec under normal conditions; there are no
 sequential scans done in those queries. There are not many users
 connected (around 3, maybe) to this database usually since it's
 still in a testing phase. I tried to hunt down the problem by
 playing around with resource usage cfg options but it didn't really
 make a difference.

    Could that be caused by a CHECKPOINT ?

   actually there are a few log entries (around 12 per day) concerning
   checkpoints:

   LOG:  checkpoints are occurring too frequently (10 seconds apart)
   HINT:  Consider increasing the configuration parameter
   checkpoint_segments.

   But wouldn't that only affect write performance? The main problems I'm
   concerned about affect SELECT queries.

  No, that will certainly NOT just affect write performance; if the
  postmaster is busy writing out checkpoints, that will block SELECT
  queries that are accessing whatever is being checkpointed.

  What I meant is if there are no INSERT's or UPDATE's going on it shouldn't
  affect SELECT queries, or am I wrong?

But checkpoints only occur every 10 seconds because of a high insert /
update rate.  So, there ARE inserts and updates going on, and a lot of
them, and they are blocking your selects when checkpoint hits.

While adjusting your background writer might be called for, and might
provide you with some relief, you REALLY need to find out what's
pushing so much data into your db at once that it's causing a
checkpoint storm.
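
In the meantime, the kind of postgresql.conf change that LOG hint is 
pointing at looks roughly like this (illustrative values, not tuned 
recommendations for your box):

    # postgresql.conf -- space checkpoints out under heavy write load
    checkpoint_segments = 32   # default 3; more WAL between checkpoints
    checkpoint_timeout = 900   # seconds; time-based ceiling on the interval
    checkpoint_warning = 30    # seconds; still logs if they come too fast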



Re: [PERFORM] Oddly slow queries

2008-04-19 Thread Christopher Browne
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Thomas Spreng) 
wrote:
 On 16.04.2008, at 17:42, Chris Browne wrote:
 [EMAIL PROTECTED] (Thomas Spreng) writes:
 On 16.04.2008, at 01:24, PFC wrote:

 The queries in question (select's) occasionally take up to 5 mins
 even if they take ~2-3 sec under normal conditions; there are no
 sequential scans done in those queries. There are not many users
 connected (around 3, maybe) to this database usually since it's
 still in a testing phase. I tried to hunt down the problem by
 playing around with resource usage cfg options but it didn't really
 make a difference.

Could that be caused by a CHECKPOINT ?

 actually there are a few log entries (around 12 per day) concerning
 checkpoints:

 LOG:  checkpoints are occurring too frequently (10 seconds apart)
 HINT:  Consider increasing the configuration parameter
 checkpoint_segments.

 But wouldn't that only affect write performance? The main problems
 I'm
 concerned about affect SELECT queries.

 No, that will certainly NOT just affect write performance; if the
 postmaster is busy writing out checkpoints, that will block SELECT
 queries that are accessing whatever is being checkpointed.

 What I meant is if there are no INSERT's or UPDATE's going on it
 shouldn't affect SELECT queries, or am I wrong?

Yes, that's right.  (Caveat: VACUUM would be a form of update, in this
context...)

 All the data modification tasks usually run at night, during the day
 there shouldn't be many INSERT's or UPDATE's going on.

 When we were on 7.4, we would *frequently* see SELECT queries that
 should be running Very Quick that would get blocked by the
 checkpoint flush.

 How did you actually see they were blocked by the checkpoint
 flushes?  Do they show up as separate processes?

We inferred this based on observed consistency of behaviour, and based
on having highly observant people like Andrew Sullivan around :-).

It definitely wasn't blatantly obvious.  It *might* be easier to see
in more recent versions, although BgWr makes the issue go away ;-).

 There are two things worth considering:

 1.  If the checkpoints are taking place too frequently, then that
 is clear evidence that something is taking place that is injecting
 REALLY heavy update load on your database at those times.

 If the postmaster is checkpointing every 10s, that implies Rather
 Heavy Load, so it is pretty well guaranteed that performance of
 other activity will suck at least somewhat because this load is
 sucking up all the I/O bandwidth that it can.

 So, to a degree, there may be little to be done to improve on this.

 I strongly assume that those log entries showed up at night when the
 heavy insert routines are being run. I'm more concerned about the
 query performance under normal conditions when there are very few
 modifications done.

I rather thought as much.

You *do* have to accept that when you get heavy update load, there
will be a lot of I/O, and in the absence of disk array fairies that
magically make bits get to the disks via automated mental telepathy
;-), you have to live with the notion that there will be *some*
side-effects on activity taking place at such times.

Or you have to spend, spend, spend on heftier hardware.  Sometimes too
expensive...

 2.  On the other hand, if you're on 8.1 or so, you may be able to
 configure the Background Writer to incrementally flush checkpoint data
 earlier, and avoid the condition of 1.

 Mind you, you'd have to set BgWr to be pretty aggressive, based on the
 10s periodicity that you describe; that may not be a nice
 configuration to have all the time :-(.
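
As a rough sketch, "pretty aggressive" on 8.1/8.2 means something along 
these lines (real parameter names, but the values are purely illustrative):

    # postgresql.conf (8.1/8.2) -- an aggressive background writer
    bgwriter_delay = 50           # ms between rounds (default 200)
    bgwriter_lru_percent = 10.0   # scan a bigger slice of the LRU each round
    bgwriter_lru_maxpages = 200   # write far more pages per round (default 5)
    bgwriter_all_percent = 5.0    # also sweep the whole buffer pool
    bgwriter_all_maxpages = 600   # cap on writes from that sweep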

 I've just seen that the daily vacuum tasks didn't run,
 apparently. The DB has almost doubled its size in the past few
 days. I guess I'll have to VACUUM FULL (dump/restore might be faster,
 though) and check if that helps anything.

If you're locking out users, then it's probably a better idea to use
CLUSTER to reorganize the tables, as that simultaneously eliminates
empty space on tables *and indices.*
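
A sketch with hypothetical table and index names, using the syntax that 
also works before 8.3 (8.3 additionally accepts CLUSTER tablename USING 
indexname):

    -- rewrite the table in index order; compacts the heap *and* its indexes
    CLUSTER accounts_pkey ON accounts;
    ANALYZE accounts;   -- refresh planner statistics after the rewrite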

In contrast, after running VACUUM FULL, you may discover you need to
reindex tables, because the reorganization of the *table* leads to
bloating of the indexes.

Pre-8.3 (I *think*), there's a transactional issue with CLUSTER where
it doesn't fully follow MVCC: tuples that are dead, but still visible
to certain transactions, go away.  That can cause surprises
(e.g. - queries missing data) if applications are accessing the
database concurrently with the CLUSTER.  It's safe as long as the DBA
can take over the database and block out applications.  And at some
point, the MVCC bug got fixed.

Note that you should check the output of a VACUUM VERBOSE run, and/or
use the contrib function pgstattuple() to check how sparse the
storage usage is.  There may only be a few tables that are behaving
badly, and cleaning up a few tables will be a lot less intrusive than
cleaning up the whole database.
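
For example, a sketch assuming the pgstattuple module has been installed 
(on 8.x, by running contrib's pgstattuple.sql against your database) and 
using a hypothetical table name:

    -- how much of the table is dead tuples vs. reusable free space?
    SELECT table_len, dead_tuple_percent, free_percent
      FROM pgstattuple('public.accounts');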

 Does a bloated DB affect the performance 

Re: [PERFORM] Background writer underemphasized ...

2008-04-19 Thread Greg Smith

On Sat, 19 Apr 2008, James Mansion wrote:

But isn't it the case that while using the background writer might result in 
*slightly* more data to write (since data that is updated several times 
might actually be sent several times), the total amount of data in both 
cases is much the same?


Really depends on your workload, how many wasted writes there are.  It 
might be significant, it might only be slight.


And if the buffer backed up in the BGW case, wouldn't it also back up 
(more?) if the writes are deferred?  And in fact by sending earlier, the 
real bottleneck (the disks) could have been getting on with it and 
starting their IO earlier?


If you write a giant block of writes, those tend to be sorted by the OS 
and possibly the controller to reduce total seeks.  That's a pretty 
efficient write and it can clear relatively fast.


But if you've been trickling writes in an unstructured form and in low 
volume, there can be a stack of them that aren't sorted well blocking the 
queue from clearing.  With a series of small writes, it's not that 
difficult to end up in a situation where a controller cache is filled with 
writes containing a larger seek component than you'd have gotten had you 
written in larger blocks that took advantage of more OS-level elevator 
sorting.  There's actually a pending patch to try and improve this 
situation with regard to checkpoint writes in the queue.


Seeks are so slow compared to more sequential writes that you really can 
end up in the counterintuitive situation that you finish faster by 
avoiding early writes, even in cases when the disk is the bottleneck.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
