Could it be checkpoints? I changed these parameters some time ago.
Here they are:
effective_cache_size = 200MB

checkpoint_segments = 40                # in logfile segments, min 1, 16MB each
checkpoint_timeout = 3min               # range 30s-1h

#bgwriter_delay = 200ms                 # 10-10000ms between rounds
bgwriter_lru_percent = 16.0             # 0-100% of LRU buffers scanned/round
bgwriter_lru_maxpages = 170             # 0-1000 buffers max written/round
bgwriter_all_percent = 7.0              # 0-100% of all buffers scanned/round
bgwriter_all_maxpages = 400             # 0-1000 buffers max written/round
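
One way to confirm the checkpoint theory (a sketch, assuming an 8.1/8.2-era
server, since the bgwriter_all_* settings were removed in 8.3): set
checkpoint_warning so that any checkpoint forced by filling the WAL segments
before the 3-minute timer gets flagged in the server log.

# postgresql.conf sketch; the 180s value is an assumption, chosen to match
# checkpoint_timeout so segment-driven checkpoints arriving early get logged
checkpoint_warning = 180s

Then grep the log for "checkpoints are occurring too frequently" and compare
the timestamps against the stalls.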

Now I'm getting the following error when vacuuming:
vacuumdb: vacuuming of database "Postgres" failed: ERRO:  multiple active vacuums for index "PartInd_replicacao_historico_sl_log_2-node-1".

But there is no other vacuum running anymore.
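
A quick way to double-check for a leftover vacuum backend, and for anything
still holding a lock on that index (a sketch; the procpid/current_query
column names assume a pre-9.2 pg_stat_activity):

psql -d Postgres -c "SELECT procpid, current_query FROM pg_stat_activity WHERE current_query ILIKE '%vacuum%';"
psql -d Postgres -c "SELECT l.pid, l.mode, l.granted FROM pg_locks l JOIN pg_class c ON c.oid = l.relation WHERE c.relname = 'PartInd_replicacao_historico_sl_log_2-node-1';"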

2008/12/8 Scott Marlowe <[EMAIL PROTECTED]>

> On Mon, Dec 8, 2008 at 10:17 AM, Rafael Domiciano
> <[EMAIL PROTECTED]> wrote:
> > Here's the output.  While the process was running, my database would
> > sometimes sit there doing nothing.
> > You said that I would probably get low numbers, but which numbers?
>
> We're looking for MB/s and the bi/bo fields in vmstat (blocks in /
> blocks out).
>
> > [EMAIL PROTECTED] banco]# time dd if=/dev/zero of=/banco/testfile bs=1000000 count=500
> > 500+0 records in
> > 500+0 records out
> > 500000000 bytes (500 MB) copied, 85,4485 seconds, 5,9 MB/s
> > real    1m25.451s
> > user    0m0.003s
> > sys     0m1.617s
> > [EMAIL PROTECTED] banco]# time dd of=/dev/null if=/banco/testfile
> > 976562+1 records in
> > 976562+1 records out
> > 500000000 bytes (500 MB) copied, 47,5025 seconds, 10,5 MB/s
> > real    0m47.543s
> > user    0m0.457s
> > sys     0m1.470s
> > And the vmstat output:
> >
> > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
> >  0  8 111760  56432  19812 1305692    0    0   744   832 2488  1518  5  3 46 46  0
>
> Here, the bi/bo numbers are 744 in and 832 out.  Those are KB/second
> we're looking at.
>
> Lots of the same numbers cut out.
>
> > procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
> >  0  6 111760  55452  18528 1296412    0    0    88 25040 1843  1436  2  3 34 61  0
> >  1  7 111760  52352  18540 1296472    0    0    36 15220 1467   944  3  2 27 68  0
> >  0  6 111760  54152  18552 1296568    0    0    88 42296 1609   822  1  2 38 59  0
> >  1  8 111760  52412  18580 1296752    0    0   172 27052 1567   950  2  3 27 69  0
> >  0  6 111724  53904  18632 1297008  320    0   580  1504 1902  2320  5  6 23 66  0
> >  1  6 111724  54280  18660 1297200    4    0   200  1060 2014  1783  5  3 44 48  0
> >  1  7 111676  51388  16340 1302348    8    0   156  1212 1684   848  2  2 53 42  0
> >  1  6 111668  55040  14864 1301048    0    4   152 46328 1595  5108  1  5 40 54  0
>
> Now we're showing that we can write to the disk at 25 to 42 megs a
> second, which is not too bad.  But it looks really bursty, like it can
> sustain that throughput for only a few seconds at a time.  Try writing a
> larger file while running vmstat 10 or 60, and see what the average looks
> like over a longer run.  I get the feeling your machine has a hard time
> sustaining throughput for some reason.
>
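
For reference, a sketch of the longer test Scott suggests; the file size and
sampling interval here are assumptions, not his exact numbers:

vmstat 10 > /tmp/vmstat.log &      # sample I/O every 10 seconds in the background
time dd if=/dev/zero of=/banco/testfile bs=1000000 count=4000    # ~4 GB instead of 500 MB
kill %1                            # stop the background vmstat

If the bo column in /tmp/vmstat.log averages well below what dd reports while
the big write runs, the machine really is struggling to sustain sequential
writes.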
