Re: [PERFORM] Segment best size
On Saturday, April 13, 2013, Rodrigo Barboza wrote:

> On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
>> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <rodrigombu...@gmail.com> wrote:
>>
>>> I was receiving warning messages of pgstat timeout. I started looking
>>> for the solution for this problem and this doubt came out. But it
>>> doesn't seem to be the problem.
>>
>> Usually that just means that you are dirtying pages faster than your
>> hard drives can write them out, leading to IO contention. Sometimes you
>> can do something about this by changing the work-load to dirty pages in
>> more compact regions, or by changing the configuration. Often the only
>> effective thing you can do is buy more hard-drives.

I meant congestion, not contention.

> Would it help if I changed the checkpoint_completion_target to something
> close to 0.7 (mine is the default 0.5)

Probably not. The practical difference between 0.5 and 0.7 is so small
that even if it did make a difference at one place and time, it would
stop making a difference again in the future for no apparent reason.

> or raise the checkpoint_segments (it is 32 now)?

Maybe. If you are dirtying the same set of pages over and over again, and
that set of pages fits in shared_buffers, then lengthening the time
between checkpoints could have a good effect. Otherwise, it probably
would not. Your best bet may be just to try it and see.

But first, have you tried to tune shared_buffers? What are you doing?
Bulk loading? Frantically updating a small number of rows? Do you have
pg_xlog on a separate IO controller from the rest of the data? Do you
have a battery-backed/nonvolatile write cache?

Cheers,

Jeff
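The settings the thread keeps coming back to all live in postgresql.conf. As a sketch only (the values are the ones mentioned in the thread, not a recommendation, and the shared_buffers figure is illustrative for a machine where 25% of RAM comes to 2GB):

```ini
# postgresql.conf sketch for the 9.1 server under discussion -- not a recommendation
shared_buffers = 2GB                 # illustrative: "25% of memory" on an 8GB box
checkpoint_segments = 32             # the thread's current value
checkpoint_completion_target = 0.5   # default; 0.7 unlikely to change much, per Jeff
```

Each of these requires at most a reload or restart to change, unlike the WAL segment size, which is fixed at build time.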
Re: [PERFORM] Segment best size
On Sun, Apr 14, 2013 at 7:34 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Saturday, April 13, 2013, Rodrigo Barboza wrote:
>
>> On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>>
>>> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <rodrigombu...@gmail.com> wrote:
>>>
>>>> I was receiving warning messages of pgstat timeout. I started looking
>>>> for the solution for this problem and this doubt came out. But it
>>>> doesn't seem to be the problem.
>>>
>>> Usually that just means that you are dirtying pages faster than your
>>> hard drives can write them out, leading to IO contention. Sometimes
>>> you can do something about this by changing the work-load to dirty
>>> pages in more compact regions, or by changing the configuration.
>>> Often the only effective thing you can do is buy more hard-drives.
>
> I meant congestion, not contention.
>
>> Would it help if I changed the checkpoint_completion_target to
>> something close to 0.7 (mine is the default 0.5)
>
> Probably not. The practical difference between 0.5 and 0.7 is so small
> that even if it did make a difference at one place and time, it would
> stop making a difference again in the future for no apparent reason.
>
>> or raise the checkpoint_segments (it is 32 now)?
>
> Maybe. If you are dirtying the same set of pages over and over again,
> and that set of pages fits in shared_buffers, then lengthening the time
> between checkpoints could have a good effect. Otherwise, it probably
> would not. Your best bet may be just to try it and see.
>
> But first, have you tried to tune shared_buffers? What are you doing?
> Bulk loading? Frantically updating a small number of rows? Do you have
> pg_xlog on a separate IO controller from the rest of the data? Do you
> have a battery-backed/nonvolatile write cache?
>
> Cheers, Jeff

My shared_buffers is tuned to 25% of memory. My pg_xlog is on the same
device. It is not bulk loading. The battery I have to check, because the
machine is not with me.
Re: [PERFORM] Segment best size
On Fri, Apr 12, 2013 at 10:20 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Friday, April 12, 2013, Rodrigo Barboza wrote:
>
>> Hi guys. I compiled my postgres server (9.1.4) with the default segment
>> size (16MB). Should it be enough? Should I increase this size in
>> compilation?
>
> To recommend that you deviate from the defaults, we would have to know
> something about your server, or how you intend to use it. But you
> haven't told us any of those things. Is there something about your
> intended use of the server that made you wonder about this setting in
> particular?
>
> Anyway, wal seg size is one of the last things I would worry about
> changing. Also, I suspect non-default settings of that parameter get
> almost zero testing (either alpha, beta, or in the field), and so I
> would consider it mildly dangerous to deploy to production.
>
> Cheers, Jeff

I was receiving warning messages of pgstat timeout. I started looking for
the solution for this problem, and this doubt came out. But it doesn't
seem to be the problem.
Re: [PERFORM] Segment best size
On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <rodrigombu...@gmail.com> wrote:

> I was receiving warning messages of pgstat timeout. I started looking
> for the solution for this problem and this doubt came out. But it
> doesn't seem to be the problem.

Usually that just means that you are dirtying pages faster than your hard
drives can write them out, leading to IO contention. Sometimes you can do
something about this by changing the work-load to dirty pages in more
compact regions, or by changing the configuration. Often the only
effective thing you can do is buy more hard-drives.

Cheers, Jeff
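One way to see whether checkpoint and background-writer I/O really is the bottleneck is the pg_stat_bgwriter view (a sketch; the column names below are the ones present in 9.1):

```sql
-- If checkpoints_req grows much faster than checkpoints_timed, checkpoints
-- are being forced by WAL volume rather than the timeout. A buffers_backend
-- count that is large relative to buffers_checkpoint + buffers_clean suggests
-- ordinary backends are having to write dirty pages out themselves.
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend
FROM pg_stat_bgwriter;
```

Sampling this a few minutes apart and comparing the deltas is more informative than a single snapshot, since the counters accumulate from server start.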
Re: [PERFORM] Segment best size
On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <rodrigombu...@gmail.com> wrote:
>
>> I was receiving warning messages of pgstat timeout. I started looking
>> for the solution for this problem and this doubt came out. But it
>> doesn't seem to be the problem.
>
> Usually that just means that you are dirtying pages faster than your
> hard drives can write them out, leading to IO contention. Sometimes you
> can do something about this by changing the work-load to dirty pages in
> more compact regions, or by changing the configuration. Often the only
> effective thing you can do is buy more hard-drives.
>
> Cheers, Jeff

Would it help if I changed the checkpoint_completion_target to something
close to 0.7 (mine is the default 0.5), or raise the checkpoint_segments
(it is 32 now)?
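For the disk-space side of raising checkpoint_segments: the 9.1 documentation estimates the steady-state number of WAL files in pg_xlog at roughly (2 + checkpoint_completion_target) * checkpoint_segments + 1, each of the default 16MB. A quick sketch of the arithmetic (Python used here only as a calculator):

```python
# Rough steady-state upper bound on pg_xlog disk usage, following the
# estimate in the 9.1 WAL configuration documentation.
WAL_SEGMENT_MB = 16  # the default compiled-in segment size discussed in this thread

def pg_xlog_peak_mb(checkpoint_segments, completion_target):
    """Approximate maximum number of retained WAL files times segment size, in MB."""
    files = int((2 + completion_target) * checkpoint_segments) + 1
    return files * WAL_SEGMENT_MB

print(pg_xlog_peak_mb(32, 0.5))  # thread's current settings -> 1296
print(pg_xlog_peak_mb(32, 0.7))  # proposed target          -> 1392
```

So at checkpoint_segments = 32 the settings under discussion already imply well over a gigabyte of pg_xlog at peak; raising the segment count mostly trades disk space and recovery time for less frequent checkpoints.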
[PERFORM] Segment best size
Hi guys. I compiled my postgres server (9.1.4) with the default segment size (16MB). Should it be enough? Should I increase this size in compilation?
Re: [PERFORM] Segment best size
On Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:

> Hi guys. I compiled my postgres server (9.1.4) with the default segment
> size (16MB). Should it be enough? Should I increase this size in
> compilation?

Unlike some default values in the configuration file, the compiled-in
defaults work well for most people.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] Segment best size
On Fri, Apr 12, 2013 at 2:52 PM, Ben Chobot <be...@silentmedia.com> wrote:

> On Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:
>
>> Hi guys. I compiled my postgres server (9.1.4) with the default segment
>> size (16MB). Should it be enough? Should I increase this size in
>> compilation?
>
> Unlike some default values in the configuration file, the compiled-in
> defaults work well for most people.

Thanks!
Re: [PERFORM] Segment best size
On Friday, April 12, 2013, Rodrigo Barboza wrote:

> Hi guys. I compiled my postgres server (9.1.4) with the default segment
> size (16MB). Should it be enough? Should I increase this size in
> compilation?

To recommend that you deviate from the defaults, we would have to know
something about your server, or how you intend to use it. But you haven't
told us any of those things. Is there something about your intended use
of the server that made you wonder about this setting in particular?

Anyway, wal seg size is one of the last things I would worry about
changing. Also, I suspect non-default settings of that parameter get
almost zero testing (either alpha, beta, or in the field), and so I would
consider it mildly dangerous to deploy to production.

Cheers, Jeff
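For reference, the segment size being asked about cannot be changed in postgresql.conf at all: it is baked in when the server is built, via configure's --with-wal-segsize option. A build-time sketch only, not a recommendation (as noted above, non-default sizes see very little testing, and a data directory initialized with one segment size cannot be started by a server built with another):

```shell
# Build-time sketch only: produce a server whose WAL segment size is 64MB
# instead of the default 16MB. The value must be a power of 2.
./configure --with-wal-segsize=64
make && make install
```

Sticking with the compiled-in 16MB default, as the replies above suggest, avoids both the rebuild and the re-initdb.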