On Saturday, April 13, 2013, Rodrigo Barboza wrote:

>
> On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
>> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <
>> rodrigombu...@gmail.com> wrote:
>>
>>>
>>>
>>> I was receiving pgstat timeout warning messages.
>>> While looking for a solution to this problem, this question came up.
>>> But that doesn't seem to be the problem.
>>>
>>
>> Usually that just means that you are dirtying pages faster than your hard
>> drives can write them out, leading to IO contention.  Sometimes you can do
>> something about this by changing the work-load to dirty pages in more
>> compact regions, or by changing the configuration.  Often the only
>> effective thing you can do is buy more hard-drives.
>>
>
I meant congestion, not contention.
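
If you want to confirm that checkpoint and background-writer activity is
what is behind the I/O pressure, one way to look -- just a sketch, using the
standard pg_stat_bgwriter view -- is:

    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;

A buffers_backend count that keeps growing much faster than
buffers_checkpoint plus buffers_clean suggests backends are being forced to
write dirty pages out themselves.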


>
>>
> Would it help if I changed the checkpoint_completion_target to something
> close to 0.7 (mine is the default 0.5)
>

Probably not.  The practical difference between 0.5 and 0.7 is so small
that even if it did make a difference at one place and time, it would stop
making a difference in the future for no apparent reason.


> or raise the checkpoint_segments (it is 32 now)?
>

Maybe.  If you are dirtying the same set of pages over and over again, and
that set of pages fits in shared_buffers, then lengthening the time between
checkpoints could have a good effect.   Otherwise, it probably would not.
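
For reference, an illustrative set of postgresql.conf values along those
lines -- example numbers only, the right values depend on your workload and
disks:

    checkpoint_segments = 64              # you are at 32; higher means less frequent checkpoints
    checkpoint_timeout = 10min            # default is 5min
    checkpoint_completion_target = 0.7    # default 0.5; little practical difference, as noted above
    log_checkpoints = on                  # cheapest way to see when and why checkpoints run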

Your best bet may be just to try it and see.  But first, have you tried to
tune shared_buffers?
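
A common starting point (a rule of thumb, not a guarantee) on a dedicated
server is shared_buffers at roughly a quarter of RAM, for example:

    shared_buffers = 4GB    # example for a machine with ~16GB RAM; the shipped default is far smaller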

What are you doing?  Bulk loading?  Frantically updating a small number of
rows?

Do you have pg_xlog on a separate IO controller from the rest of the data?
Do you have a battery-backed/nonvolatile write cache?

Cheers,

Jeff
