Hello.
As it turned out to be iowait, I'd recommend trying to load at least some
hot relations into the FS cache with dd on startup. On FreeBSD boxes with a
lot of RAM I sometimes even do this for long queries that require a lot of
index scans.
This converts random IO into sequential IO, which is much faster.
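For illustration, a minimal sketch of that kind of warm-up, assuming a stock
$PGDATA/base layout (the database OID and file names below are placeholders;
pg_relation_filepath() reports the real ones):

    # Hypothetical warm-up: read each hot relation file sequentially so it
    # lands in the OS file system cache before clients reconnect.
    # 16384 is a placeholder database OID, 16385* a placeholder relation.
    for f in /var/lib/pgsql/data/base/16384/16385*; do
        dd if="$f" of=/dev/null bs=8M
    done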
If you are not doing so already, another approach to preventing the slam at
startup would be to implement some form of caching, either in memcached or in
an HTTP accelerator such as Varnish (https://www.varnish-cache.org/).
Depending on your application and its usage patterns, you might be able to
fairly easily cache a large share of the requests, as sketched below.
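As a sketch of the accelerator option (the listen address, backend address,
and cache size are illustrative assumptions, not from this thread):

    # Hypothetical varnishd invocation: serve on :80, proxy the application
    # server on 127.0.0.1:8080, and keep a 1 GB in-memory cache of responses.
    varnishd -a :80 -b 127.0.0.1:8080 -s malloc,1G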
On September 6, 2011 12:35:35 PM Richard Shaw wrote:
> Thanks for the advice; it's one under consideration at the moment. What
> are your thoughts on increasing RAM and shared_buffers?
>
If it's running OK after the startup rush, and it seems to be, I would leave
them alone. More RAM is always welcome, though.
Thanks for the advice; it's one under consideration at the moment. What are
your thoughts on increasing RAM and shared_buffers?
On 6 Sep 2011, at 20:21, Alan Hodgson wrote:
> On September 6, 2011 12:11:10 PM Richard Shaw wrote:
>> 24 :)
>>
>> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz]
>>
On September 6, 2011 12:11:10 PM Richard Shaw wrote:
> 24 :)
>
> 4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz]
>
Nice box.
Still I/O-bound, though. SSDs would help a lot, I would think.
24 :)
4 x Intel Xeon-NehalemEX E7540-HexCore [2GHz]
On 6 Sep 2011, at 20:07, Alan Hodgson wrote:
> On September 5, 2011 03:36:09 PM you wrote:
>> After Restart
>>
>> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>>  r  b   swpd   free   buff  cache   si   so
On September 5, 2011 03:36:09 PM you wrote:
> After Restart
>
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  2 34   2332 5819012  75632 258553680089 420
On Monday 05 Sep 2011 22:23:32 Scott Marlowe wrote:
> On Mon, Sep 5, 2011 at 11:24 AM, Andres Freund wrote:
> > On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:
> >> Autovacuum has been disabled and set to run manually via cron during a
> >> quiet period, and fsync has recently been turned off to gauge any
> >> real-world performance increase.
/
OS and Postgres are on the same mount point.
On 6 Sep 2011, at 00:31, Scott Marlowe wrote:
> On Mon, Sep 5, 2011 at 4:36 PM, Richard Shaw wrote:
>> Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
>> sda        1.00  143.00 523.50 108.00 8364.00 2008.00    16.42     2.78   4.41   1.56  98.35
On Mon, Sep 5, 2011 at 4:36 PM, Richard Shaw wrote:
> Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
> sda        1.00  143.00 523.50 108.00 8364.00 2008.00    16.42     2.78   4.41   1.56  98.35
> sda1       0.00  0
vmstat 1 and iostat -x output
Normal
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  0   2332 442428  73904 3128734400894200 7 5 85 3 0
 4  1  2
On Mon, Sep 5, 2011 at 11:24 AM, Andres Freund wrote:
> On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:
>> Autovacuum has been disabled and set to run manually via cron during a quiet
>> period, and fsync has recently been turned off to gauge any real-world
>> performance increase; there is battery backup on the RAID card providing
>> some level of protection.
On September 5, 2011, Richard Shaw wrote:
> Hi Andy,
>
> It's not a new issue, no. It's a legacy system that is in no way ideal but
> is also not in a position to be overhauled. Indexes are correct; tables
> are up to 25 million rows.
>
> On startup it hits CPU more than IO. I'll provide some additional stats
> after I restart it tonight.
On Monday, September 05, 2011 14:57:43 Richard Shaw wrote:
> Autovacuum has been disabled and set to run manually via cron during a quiet
> period, and fsync has recently been turned off to gauge any real-world
> performance increase; there is battery backup on the RAID card providing
> some level of protection.
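For reference, that configuration would look roughly like this in
postgresql.conf (an illustrative excerpt, not the poster's actual file):

    # Illustrative postgresql.conf excerpt matching the setup described above.
    autovacuum = off   # VACUUM instead runs from cron during a quiet period
    fsync = off        # unsafe: an OS crash can corrupt the cluster; a
                       # battery-backed RAID cache does not protect against that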
On 09/05/2011 08:57 AM, Richard Shaw wrote:
Hi Andy,
It's not a new issue, no. It's a legacy system that is in no way ideal but is
also not in a position to be overhauled. Indexes are correct; tables are up to
25 million rows.
On startup it hits CPU more than IO. I'll provide some additional stats after
I restart it tonight.
I think checkpoint_segments is too low; try 30.
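A sketch of that change as it would sit in postgresql.conf (the second line
is a common companion setting, my assumption rather than part of the advice):

    # Allow more WAL between checkpoints so heavy write bursts do not force
    # constant checkpointing (checkpoint_segments is the setting of this era;
    # releases from 9.5 on use max_wal_size instead).
    checkpoint_segments = 30
    checkpoint_completion_target = 0.9  # assumed: spread checkpoint I/O out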
On 2011/9/5, Andy Colson wrote:
> On 09/05/2011 05:28 AM, Richard Shaw wrote:
>>
>> Hi,
>>
>> I have a database server that's part of a web stack. It experiences
>> prolonged load average spikes of up to 400+ when the db is restarted and
>> first accessed
Hi Andy,
It's not a new issue, no. It's a legacy system that is in no way ideal but is
also not in a position to be overhauled. Indexes are correct; tables are up to
25 million rows.
On startup it hits CPU more than IO. I'll provide some additional stats after
I restart it tonight.
Serv
On 09/05/2011 05:28 AM, Richard Shaw wrote:
Hi,
I have a database server that's part of a web stack. It experiences prolonged
load average spikes of up to 400+ when the db is restarted and first accessed
by the other parts of the stack, and has generally poor performance on even
simple select queries.
On 5/09/2011 6:55 PM, Richard Shaw wrote:
Hi Craig,
Apologies, I should have made that clearer. I am using PgBouncer 1.4.1 in
front of Postgres and included the config at the bottom of my original mail.
Ah, I see. The point still stands: your hardware can *not* efficiently do
work for 1000 concurrent connections at once.
Hi Craig,
Apologies, I should have made that clearer. I am using PgBouncer 1.4.1 in
front of Postgres and included the config at the bottom of my original mail.
Regards
Richard
On 5 Sep 2011, at 11:49, Craig Ringer wrote:
> On 5/09/2011 6:28 PM, Richard Shaw wrote:
>> max_connections | 1000
On 5/09/2011 6:28 PM, Richard Shaw wrote:
max_connections | 1000
Whoa! No wonder you have "stampeding herd" problems after a DB or server
restart and are having performance issues.
When you have 1000 clients trying to do work at once, they'll all be
fighting over memory, disk I/O, and CPU time.
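Since PgBouncer already sits in front of the database here, a sketch of pool
limits that avoid the stampede (the keys are real pgbouncer.ini settings; the
values are illustrative assumptions, not taken from this thread):

    ; Illustrative pgbouncer.ini excerpt: let up to 1000 application clients
    ; share a much smaller number of real backend connections.
    [pgbouncer]
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 50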
Hi,
I have a database server that's part of a web stack. It experiences prolonged
load average spikes of up to 400+ when the db is restarted and first accessed
by the other parts of the stack, and has generally poor performance on even
simple select queries.
There are 30 DBs in total on t