Rafael Domiciano wrote:
> 2008/12/9 Rafael Domiciano
>
> > Could it be checkpoints? I changed the parameters some time ago.
> > Follow:
> > effective_cache_size = 200MB
> >
> > checkpoint_segments = 40  # in logfile segments, min 1, 16MB each
> > checkpoint_timeout =
I think I have resolved my problems with vacuum. I started autovacuum, and
now run a vacuum verbose analyze daily and a reindex weekly.
I'm monitoring the database now.
But my database is still slow.
Googling for a while, I started to read about pdflush and dirty_ratio.
Executing the command
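For reference, the pdflush thresholds being discussed live under /proc/sys/vm on Linux; a minimal, read-only look at them (values differ per machine and kernel):

```shell
# Linux-only sketch: inspect the kernel's dirty-page writeback thresholds.
# dirty_ratio: % of memory dirty before writing processes must flush synchronously.
# dirty_background_ratio: % of memory dirty before pdflush starts background writeback.
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio
```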
Could it be checkpoints? I changed the parameters some time ago.
Follow:
effective_cache_size = 200MB
checkpoint_segments = 40  # in logfile segments, min 1, 16MB each
checkpoint_timeout = 3min # range 30s-1h
#bgwriter_delay = 200ms # 10-10000ms between rounds
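For context, a checkpoint-tuning sketch in postgresql.conf. The values below are illustrative, not recommendations: with checkpoint_segments = 40 and checkpoint_timeout = 3min, checkpoints can fire quite often on a write-heavy box, and (on 8.3 or later) turning on log_checkpoints is a low-risk way to see how frequently they actually occur:

```
# postgresql.conf -- hypothetical values for experimentation only
checkpoint_segments = 40            # each WAL segment is 16MB
checkpoint_timeout = 15min          # spacing checkpoints out reduces I/O spikes
checkpoint_completion_target = 0.7  # 8.3+: spread checkpoint writes over the interval
log_checkpoints = on                # 8.3+: log when and why each checkpoint happens
```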
On Mon, Dec 8, 2008 at 10:17 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> Here's the output. While the process was running, my database sometimes went
> a while without doing anything.
> You said that I would probably get low numbers, but what numbers?
We're looking for MB/s and the bi/bo fields in vmstat (bl
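To put bi/bo into MB/s terms: in procps vmstat, bi/bo report KiB read/written per second, and they are fields 9 and 10 of each data row (after r, b, swpd, free, buff, cache, si, so). A small sketch using a fabricated sample line (all numbers invented for illustration):

```shell
# Extract bi (field 9) and bo (field 10) from a made-up vmstat data row
# and convert KiB/s to MB/s.
# Field order: r b swpd free buff cache si so bi bo in cs us sy id wa st
sample="0 13 127728 58020 19072 1332076 0 0 5120 10240 500 800 10 5 70 15 0"
bi=$(echo "$sample" | awk '{print $9}')
bo=$(echo "$sample" | awk '{print $10}')
echo "read:  $(( bi / 1024 )) MB/s"
echo "write: $(( bo / 1024 )) MB/s"
```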
On Mon, Dec 8, 2008 at 9:29 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> 1 Drive Only. This server has no RAID.
> Do you think the I/O is too high and that I need RAID?!
Not necessarily. Like I said, my laptop currently is about 25 to 30
times faster writing to disk than your server.
1 drive only. This server has no RAID.
Do you think the I/O is too high and that I need RAID?!
2008/12/8 Scott Marlowe <[EMAIL PROTECTED]>
> On Mon, Dec 8, 2008 at 9:11 AM, Rafael Domiciano
> <[EMAIL PROTECTED]> wrote:
> > How do I look at my I/O subsystem? vmstat? If so, here it is:
>
> No, I
On Mon, Dec 8, 2008 at 9:11 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> How do I look at my I/O subsystem? vmstat? If so, here it is:
No, I mean: how many drives do you have, what kind of RAID controller,
if any, how they're configured, and so on. :) Sorry, I wasn't really
clear there, was I?
The
No, I'm going to try
2008/12/8 Scott Marlowe <[EMAIL PROTECTED]>
> On Mon, Dec 8, 2008 at 6:04 AM, Rafael Domiciano
> <[EMAIL PROTECTED]> wrote:
> > Hello guys,
> > I tried to modify my vacuum routine, and started to run only a daily
> > vacuum verbose analyze followed by a weekly reindex.
>
> Ha
How do I look at my I/O subsystem? vmstat? If so, here it is:
[EMAIL PROTECTED] ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0 13 127728  58020  19072 133207632
On Mon, Dec 8, 2008 at 6:04 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> Hello guys,
> I tried to modify my vacuum routine, and started to run only a daily vacuum
> verbose analyze followed by a weekly reindex.
Have you tried running autovacuum with a naptime of 10 or 20?
--
Sent via pgsql-ad
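The naptime being suggested maps to the autovacuum_naptime setting in postgresql.conf; a hypothetical fragment (the parameter names are real, the value is just the one floated above):

```
# postgresql.conf -- illustrative, per the suggestion above
autovacuum = on
autovacuum_naptime = 20   # seconds between autovacuum runs (default 60)
```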
On Mon, Dec 8, 2008 at 6:04 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> Hello guys,
> I tried to modify my vacuum routine, and started to run only a daily vacuum
> verbose analyze followed by a weekly reindex.
> But I am still having problems in my database. The load average (uptime) is
> hard to keep b
Hello guys,
I tried to modify my vacuum routine, and started to run only a daily vacuum
verbose analyze followed by a weekly reindex.
But I am still having problems in my database. The load average (from
uptime) is hard to keep below 10.
I'm thinking that my hardware is no longer as good as it was some time ago.
The m
On Wed, Nov 26, 2008 at 12:54 PM, Matthew T. O'Connor <[EMAIL PROTECTED]> wrote:
> Rafael Domiciano wrote:
>>
>> I'm not using autovacuum. Regular vacuum goes OK.
>> To see the last 10 lines of the verbose output, I will need to run vacuum tonight.
>>
>> If I run a reindex before the vacuum full, will it increase the "
Rafael Domiciano wrote:
I'm not using autovacuum. Regular vacuum goes OK.
To see the last 10 lines of the verbose output, I will need to run vacuum tonight.
If I run a reindex before the vacuum full, will it increase the "speed" of
the vacuum? I found something about it while googling.
It might help a bit, but by the
Rafael Domiciano wrote:
> The database has around 40 Gb.
>
> If I don't run vacuum full every day, the database gets very slow.
>
> There is no deadlock on the database.
> The vacuum cleans the table and every index of the table
> "clifatura". And at the end of the vacuum, it seems that vacu
On Wed, Nov 26, 2008 at 10:21 AM, Rafael Domiciano
<[EMAIL PROTECTED]> wrote:
> I'm not using autovacuum. Regular vacuum goes OK.
> To see the last 10 lines of the verbose output, I will need to run vacuum tonight.
> If I run a reindex before the vacuum full, will it increase the "speed" of
> the vacuum? I found some
I'm not using autovacuum. Regular vacuum goes OK.
To see the last 10 lines of the verbose output, I will need to run vacuum tonight.
If I run a reindex before the vacuum full, will it increase the "speed" of
the vacuum? I found something about it while googling.
2008/11/26 Scott Marlowe <[EMAIL PROTECTED]>
> 2008/11/26 R
2008/11/26 Rafael Domiciano <[EMAIL PROTECTED]>:
> The database has around 40 Gb.
> If I don't run vacuum full every day, the database gets very slow.
>
> There is no deadlock on the database.
You didn't mention if you were using autovacuum or not. You also
didn't mention whether or not you'd tried r
The database has around 40 Gb.
If I don't run vacuum full every day, the database gets very slow.
There is no deadlock on the database.
The vacuum cleans the table and every index of the table
"clifatura". And at the end of the vacuum, it seems that vacuum is working
hard on the table (Vacuum
Rafael Domiciano wrote:
Hei
> I need some help or just some hints. I am having problems with vacuum
> full in one table only: "clifatura".
> That table has today around 7 million rows.
>
How big is the database?
> I scheduled a cron job on the server to run VACUUM FULL every day at
> 23:00, b
Hi there,
I need some help, or just some hints. I am having problems with vacuum full
in one table only: "clifatura".
That table today has around 7 million rows.
I scheduled a cron job on the server to run VACUUM FULL every day at 23:00,
but on the following day, at 8 A.M., vacuum is still working on
On Wed, Oct 25, 2006 at 04:00:29PM -0400, Mike Goldner wrote:
> I just noticed that my nightly vacuum failed the previous three nights
> with the following error:
>
> [2032-jbossdb-postgres-2006-10-24 04:12:30.019 EDT] ERROR: failed to
> re-find parent key in "jms_messages_pkey"
> [2032-jbossdb-p
On Wed, Oct 25, 2006 at 03:54:17PM -0400, Mike Goldner wrote:
> Finally, and most important is the blocking. The vacuum duration
> reported in the log converts to about 170 minutes. I can track
> backwards in the log and the only messages prior to the 6:52am
> completion of the vacuum end at 3:57
Mike Goldner <[EMAIL PROTECTED]> writes:
> First of all, my max_fsm_pages is obviously way off. However, every
> time I increase my max_fsm_pages, the next vacuum says that it requires
> more. Will there ever be a plateau in the requested pages?
We realized recently that this can happen if you h
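The setting under discussion lives in postgresql.conf; a hypothetical fragment (the value shown is arbitrary and must be sized from the totals reported at the end of VACUUM VERBOSE output, not copied; changing it requires a server restart on 8.x):

```
# postgresql.conf -- illustrative only; size from VACUUM VERBOSE output
max_fsm_pages = 2000000     # free-space map page slots
max_fsm_relations = 1000    # free-space map relation slots
```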
On Wed, 2006-10-25 at 15:54 -0400, Mike Goldner wrote:
> I have a nightly vacuum scheduled as follows:
>
> su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=mydb"
>
> Last night, it appears that the vacuum blocked db access from my
> application server (JBoss). Here is the logfile snippet:
Mike Goldner wrote:
> I have a nightly vacuum scheduled as follows:
>
> su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=mydb"
>
> Last night, it appears that the vacuum blocked db access from my
> application server (JBoss). Here is the logfile snippet:
>
> [3693-jbossdb-postgres-2006-10
I have a nightly vacuum scheduled as follows:
su - postgres -c "/usr/bin/vacuumdb --analyze --dbname=mydb"
Last night, it appears that the vacuum blocked db access from my
application server (JBoss). Here is the logfile snippet:
[3693-jbossdb-postgres-2006-10-25 06:52:29.488 EDT]NOTICE: number
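The nightly job above was presumably driven by cron; a hypothetical crontab entry for the postgres user (the redirect and log path are made up for illustration):

```
# crontab -e (as postgres) -- illustrative only; runs nightly at 23:00
0 23 * * * /usr/bin/vacuumdb --analyze --dbname=mydb >> /tmp/vacuumdb.log 2>&1
```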
"Martins Zarins" <[EMAIL PROTECTED]> writes:
> Some rows are up to 400 KB in length. The database is
> working just fine, but vacuum crashes every time after processing
> that full-text table.
You're going to have to provide more information than that. What
happens *exactly*? What shows up in th
Hello, all!
I'm new to this list.
Before version 7.1 there was a restriction on SQL tuple length of 8-32 KB.
Now I'm using 7.1.2, where there is no such limitation. I have created
a documents database. There is a table with the full text of
documents. Some rows are up to 400 KB in length. The database is