I think I have resolved my problems with vacuum. I enabled autovacuum, run
VACUUM VERBOSE ANALYZE daily, and REINDEX weekly.
I'm monitoring the database now.
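
For reference, that schedule as cron entries (a sketch; the times and exact
flags are illustrative, not necessarily what I run):

  # /etc/crontab: daily vacuum + analyze, weekly reindex
  0 2 * * *  postgres  vacuumdb --all --analyze --verbose
  0 3 * * 0  postgres  reindexdb --all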

But my database is still slow.
After googling for a while I started reading about pdflush and dirty_ratio.
Running the command (while true; do grep Dirty /proc/meminfo; sleep 1;
done) I get:
Dirty:            8912 kB
Dirty:            5092 kB
Dirty:            5912 kB
Dirty:            6024 kB
Dirty:            6472 kB
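
A variant of the same loop that also watches Writeback shows when pdflush
actually flushes the pages out (assuming a standard /proc/meminfo):

  while true; do grep -E '^(Dirty|Writeback)' /proc/meminfo; sleep 1; done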

Over time the Dirty value increases, and then my server stops answering
queries entirely; even connection startup sits in the queue. Once the Dirty
value drops, the server resumes processing queries and responding:
Dirty:           50024 kB
Dirty:           50684 kB
Dirty:           49888 kB
Dirty:           44884 kB
Dirty:           16888 kB
Dirty:           8884 kB

I set dirty_ratio = 30 and dirty_background_ratio = 1.
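
For reference, the way these can be set at runtime (a sketch; the values
revert on reboot unless also added to /etc/sysctl.conf):

  sysctl -w vm.dirty_ratio=30
  sysctl -w vm.dirty_background_ratio=1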

I don't know what else to do to solve this slowdown problem.

Does anybody have any ideas?

Thanks,

Rafael Domiciano

2008/12/9 Rafael Domiciano <rafael.domici...@gmail.com>

> Could it be checkpoints? I changed the parameters some time ago.
> Here they are:
> effective_cache_size = 200MB
>
> checkpoint_segments = 40                # in logfile segments, min 1, 16MB each
> checkpoint_timeout = 3min               # range 30s-1h
>
> #bgwriter_delay = 200ms                 # 10-10000ms between rounds
> bgwriter_lru_percent = 16.0             # 0-100% of LRU buffers scanned/round
> bgwriter_lru_maxpages = 170             # 0-1000 buffers max written/round
> bgwriter_all_percent = 7.0              # 0-100% of all buffers scanned/round
> bgwriter_all_maxpages = 400             # 0-1000 buffers max written/round
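>
> One way to confirm whether checkpoints line up with the stalls is
> checkpoint_warning (a sketch; 300s is just an example value):
>
> checkpoint_warning = 300s               # warn if segment-driven checkpoints
>                                         # come closer together than this
>
> and then watch the server log for "checkpoints are occurring too
> frequently" messages.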
>
> Now I'm having the following problem with vacuum:
> vacuumdb: vacuuming of database "Postgres" failed: ERROR:  multiple active
> vacuums for index "PartInd_replicacao_historico_sl_log_2-node-1".
>
> But no vacuum is running anymore.
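>
> To double-check what the server thinks is running, a query along these
> lines should list any active vacuums (on 8.2-era releases the columns
> are procpid and current_query):
>
>   psql -c "SELECT procpid, current_query FROM pg_stat_activity
>            WHERE current_query ILIKE '%vacuum%';"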
>
> 2008/12/8 Scott Marlowe <scott.marl...@gmail.com>
>
>> On Mon, Dec 8, 2008 at 10:17 AM, Rafael Domiciano
>> <rafael.domici...@gmail.com> wrote:
>> > Here's the output. While the process was running, my database sometimes
>> > went unresponsive, not doing anything.
>> > You said I would probably get low numbers, but which numbers?
>>
>> We're looking for the MB/s figures from dd and the bi/bo fields in
>> vmstat (blocks in / blocks out).
>>
>> > [r...@postgres banco]# time dd if=/dev/zero of=/banco/testfile bs=1000000 count=500
>> > 500+0 records in
>> > 500+0 records out
>> > 500000000 bytes (500 MB) copied, 85,4485 seconds, 5,9 MB/s
>> > real    1m25.451s
>> > user    0m0.003s
>> > sys     0m1.617s
>> > [r...@postgres banco]# time dd of=/dev/null if=/banco/testfile
>> > 976562+1 records in
>> > 976562+1 records out
>> > 500000000 bytes (500 MB) copied, 47,5025 seconds, 10,5 MB/s
>> > real    0m47.543s
>> > user    0m0.457s
>> > sys     0m1.470s
>> > And the vmstat output:
>> >
>> > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
>> >  0  8 111760  56432  19812 1305692    0    0   744   832 2488  1518  5  3 46 46  0
>>
>> Here, the bi/bo numbers are 744 blocks in and 832 blocks out.  That's
>> KB/second we're looking at.
>>
>> (Lots of similar lines cut out.)
>>
>> > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
>> >  0  6 111760  55452  18528 1296412    0    0    88 25040 1843  1436  2  3 34 61  0
>> >  1  7 111760  52352  18540 1296472    0    0    36 15220 1467   944  3  2 27 68  0
>> >  0  6 111760  54152  18552 1296568    0    0    88 42296 1609   822  1  2 38 59  0
>> >  1  8 111760  52412  18580 1296752    0    0   172 27052 1567   950  2  3 27 69  0
>> >  0  6 111724  53904  18632 1297008  320    0   580  1504 1902  2320  5  6 23 66  0
>> >  1  6 111724  54280  18660 1297200    4    0   200  1060 2014  1783  5  3 44 48  0
>> >  1  7 111676  51388  16340 1302348    8    0   156  1212 1684   848  2  2 53 42  0
>> >  1  6 111668  55040  14864 1301048    0    4   152 46328 1595  5108  1  5 40 54  0
>>
>> Now we're seeing that you can write to the disk at 25 to 42 MB a
>> second, which is not too bad.  But it looks really bursty, like it can
>> sustain this throughput for only a few seconds.  Try writing a larger
>> file and run vmstat 10 or 60 as well, and see what the average over a
>> longer period with a larger file is.  I get the feeling your machine
>> has a hard time sustaining throughput for some reason.
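>>
>> Something like this (the 4 GB size is just an example; pick a size well
>> over your RAM so caching doesn't hide the disk):
>>
>>   time dd if=/dev/zero of=/banco/testfile bs=1000000 count=4000
>>   # in another terminal, 10-second averages:
>>   vmstat 10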
>>
>
>
