On 16 November 2011, 18:31, Cody Caughlan wrote:
>
> On Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:
>
>> On 16 November 2011, 2:21, Cody Caughlan wrote:
>>> How did you build your RAID array? Maybe I have a fundamental flaw /
>>> misconfiguration. I am doing it via:
>>>
>>> $ yes | mdadm --create
On Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:
> On 16 November 2011, 2:21, Cody Caughlan wrote:
>> How did you build your RAID array? Maybe I have a fundamental flaw /
>> misconfiguration. I am doing it via:
>>
>> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4
>> /dev/xvdb
On 16 November 2011, 2:21, Cody Caughlan wrote:
> How did you build your RAID array? Maybe I have a fundamental flaw /
> misconfiguration. I am doing it via:
>
> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4
> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
> $ pvcreate /dev/md0
> $ vgc
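The quoted commands break off at the volume-group step. A typical LVM continuation on top of that md0 array would look roughly like the sketch below; the volume-group name, logical-volume name, filesystem, and mount point are illustrative assumptions, not values taken from this thread:

$ vgcreate vg_pg /dev/md0                                      # hypothetical VG name
$ lvcreate -l 100%FREE -n lv_pgdata vg_pg                      # one LV spanning the whole array
$ mkfs.xfs /dev/vg_pg/lv_pgdata                                # filesystem choice is an assumption
$ mount -o noatime /dev/vg_pg/lv_pgdata /var/lib/postgresql    # mount point is an assumption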
On 16 November 2011, 5:27, Greg Smith wrote:
> On 11/14/2011 01:16 PM, Cody Caughlan wrote:
>> We're starting to see some slow queries, especially COMMITs that are
>> happening more frequently. The slow queries are against seemingly
>> well-indexed tables.
>> Slow commits like:
>>
>> 2011-11-14 17:
On 11/14/2011 01:16 PM, Cody Caughlan wrote:
We're starting to see some slow queries, especially COMMITs that are
happening more frequently. The slow queries are against seemingly
well-indexed tables.
Slow commits like:
2011-11-14 17:47:11 UTC pid:14366 (44/0-0) LOG: duration: 3062.784 ms
statement: COMMIT
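For context, log lines like the one above come from PostgreSQL's slow-statement logging; a minimal sketch of the relevant postgresql.conf setting (the 1000 ms threshold is an assumption, not necessarily what was used here):

log_min_duration_statement = 1000    # log any statement, including COMMIT, that runs longer than 1000 ms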
On Tue, Nov 15, 2011 at 5:16 PM, Tomas Vondra wrote:
> On 14.11.2011 22:58, Cody Caughlan wrote:
>> I ran bonnie++ on a slave node, doing active streaming replication but
>> otherwise idle:
>> http://batch-files-test.s3.amazonaws.com/sql03.prod.html
>>
>> bonnie++ on the master node:
>> http:
On 14.11.2011 22:58, Cody Caughlan wrote:
> I ran bonnie++ on a slave node, doing active streaming replication but
> otherwise idle:
> http://batch-files-test.s3.amazonaws.com/sql03.prod.html
>
> bonnie++ on the master node:
> http://batch-files-test.s3.amazonaws.com/sql01.prod.html
>
> If I
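The exact bonnie++ invocation isn't shown in the thread; on a 16GB box a typical run against the array would look something like this (directory, size, and user are assumptions):

$ bonnie++ -d /var/lib/postgresql/bonnie -s 32G -n 0 -u postgres
  # -d: directory on the volume under test (path assumed)
  # -s 32G: roughly 2x RAM so the OS page cache can't mask disk speed
  # -n 0: skip the small-file creation phase
  # -u: user to run as when started as root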
On 15.11.2011 01:13, Cody Caughlan wrote:
> The first two are what I would think would be largely read operations
> (certainly the SELECT), so it's not clear why a SELECT consumes write
> time.
>
> Here is the output of some pg_stat_bgwriter stats from the last couple of
> hours:
>
> https://
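The linked pg_stat_bgwriter output is behind a truncated URL, but the view is easy to re-query; a sketch using the standard 9.1 columns (nothing here is specific to this thread):

$ psql -c "SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint,
                  buffers_clean, maxwritten_clean, buffers_backend, buffers_alloc
           FROM pg_stat_bgwriter;"

A high checkpoints_req count relative to checkpoints_timed, or a large share of writes in buffers_backend, is the usual sign that checkpoint and bgwriter settings need attention.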
On Mon, Nov 14, 2011 at 2:57 PM, Tomas Vondra wrote:
> On 14 November 2011, 22:58, Cody Caughlan wrote:
>>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>>> usually too low).
>>
>> Ok, will do.
>
> Yes, but find out what that means and think about the possible impact
>
On 14 November 2011, 22:58, Cody Caughlan wrote:
>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>> usually too low).
>
> Ok, will do.
Yes, but find out what that means and think about the possible impact
first. It usually improves the checkpoint behaviour but increases the amount of WAL that has to be replayed after a crash, so recovery takes longer.
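To make that suggestion concrete, the change amounts to a couple of postgresql.conf lines along these lines (the 15min value is an example, not a recommendation made in the thread):

checkpoint_timeout = 15min              # up from the 5min default
checkpoint_completion_target = 0.9      # already in the posted config; spreads checkpoint writes over the interval
checkpoint_segments = 32                # unchanged from the posted config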
Thanks for your response. Please see below for answers to your questions.
On Mon, Nov 14, 2011 at 11:22 AM, Tomas Vondra wrote:
> On 14 November 2011, 19:16, Cody Caughlan wrote:
>> shared_buffers = 3584MB
>> wal_buffers = 16MB
>> checkpoint_segments = 32
>> max_wal_senders = 10
>> checkpoint_com
On 14 November 2011, 19:16, Cody Caughlan wrote:
> shared_buffers = 3584MB
> wal_buffers = 16MB
> checkpoint_segments = 32
> max_wal_senders = 10
> checkpoint_completion_target = 0.9
> wal_keep_segments = 1024
> maintenance_work_mem = 256MB
> work_mem = 88MB
> shared_buffers = 3584MB
> effective_ca
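To see the full, untruncated list of non-default settings on such a box, a standard catalog query works (not taken from the thread):

$ psql -c "SELECT name, setting, unit FROM pg_settings
           WHERE source <> 'default' ORDER BY name;"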
Hi, running Postgres 9.1.1 on an EC2 m1.xlarge instance. Machine is a
dedicated master with 2 streaming replication nodes.
The machine has 16GB of RAM and 4 cores.
We're starting to see some slow queries, especially COMMITs that are
happening more frequently. The slow queries are against seemingly well-indexed tables.