David Ondrejik wrote:
> I think I see a (my) fatal flaw that will cause the cluster to
> fail.
> Kevin Grittner stated:
>> If you have room for a second copy of your data, that is almost
>> always much faster, and less prone to problems.
>
> I looked at the sizes for the tables in the database
Sent: Thursday, July 21, 2011 2:12 PM
Subject: Re: [ADMIN] vacuumdb question/problem
I think I see a (my) fatal flaw that will cause the cluster to fail.
From the info I received from previous posts, I am going to change
my game plan. If anyone has thoughts as to different process or
can confirm that I am on the right track, I would appreciate your
input.
1. I am going to ru
David Ondrejik wrote:
> The posting of data to the table in question is extremely
> slow...yesterday I saw that it took over 6 min to post just 124
> rows of data. That is just not acceptable. Additionally, we have
> about 9,000 to 11,000 products that come in daily (some contain
> one row of data
Thanks to everyone for their response and help. I still have some
more questions that hopefully someone can help me with as I have not yet
been able to solve my vacuumdb problem.
The posting of data to the table in question is extremely
slow...yesterday I saw that it took over 6 min to post just 124
rows of data.
David Ondrejik wrote:
> So I had to kill my process, recover disk space and get the
> machine back in working condition for the weekend. I guess I will
> attempt to do the full vacuum again next week.
Why do you think that you need a VACUUM FULL? That is only needed
as an extreme measure in r
David Ondrejik writes:
> I am still wondering how the vacuum process actually works. When it
> throws the output lines that show how many rows are
> recoverable/nonremovable, does this mean that the vacuum has completed?
No, that's just the end of the first pass over the table. After that it
Tom,
It was not consuming any CPU time. I found that another process on the
machine failed. That process was trying to write to a different table
(not the one I was vacuuming) in the database and that table was locked
(not sure why). It produced thousands of errors which caused the log
file
Simon Riggs writes:
> On Fri, Jul 15, 2011 at 5:10 PM, David Ondrejik
> wrote:
>> Since then, the process has continued to run (for about 20 hrs) without any
>> additional information being returned.
> Probably locked behind another long running task that is holding a buffer pin.
That's possible.
Simon - thanks for the response. I checked all the processes and
nothing appears to be holding it up. Any other advice?
Simon Riggs said the following on 7/15/2011 12:21 PM:
On Fri, Jul 15, 2011 at 5:10 PM, David Ondrejik wrote:
Since then, the process has continued to run (for about 20 hrs) without any
additional information being returned.
On Fri, Jul 15, 2011 at 5:10 PM, David Ondrejik wrote:
> Since then, the process has continued to run (for about 20 hrs) without any
> additional information being returned.
Probably locked behind another long running task that is holding a buffer pin.
--
Simon Riggs http://
Hello,
I am new to this list and hope I have chosen the appropriate group to
ask this question.
We are running version 8.2.6 of postgres and I am trying to run a full
vacuum on a single table in our database. I started the vacuum about 24
hours ago and it is still running. Within 2-3 hrs o
Nice to know that, which means we can only send out scripts by cron...
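For what it's worth, a nightly vacuumdb from cron can be as simple as the
entry below (path, database name, and schedule are only examples, not a
recommendation):

```
# min hour dom mon dow  command
30    2    *   *   *    /usr/bin/vacuumdb -z -d mydb >> /var/log/vacuumdb.log 2>&1
```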
On 8 September 2010 16:06, Fabrízio de Royes Mello
wrote:
>
>
> 2010/9/8 Bèrto ëd Sèra
>
> Hi!
>>
>> I would also expect you to be able to make a Stored Procedure executing
>> the same command, although I never tried it myself.
2010/9/8 Bèrto ëd Sèra
> Hi!
>
> I would also expect you to be able to make a Stored Procedure executing the
> same command, although I never tried it myself.
>
>
It is not possible... VACUUM cannot be executed inside a function or a
transaction block.
See the sample:
-- Using a function
CREATE OR REPLACE FUNCTION sample_vacuum() RETURNS void AS $$
BEGIN
  VACUUM;
END;
$$ LANGUAGE plpgsql;
-- SELECT sample_vacuum(); raises an error: a function always runs inside
-- a transaction, and VACUUM cannot run inside a transaction block.
Hi!
I would also expect you to be able to make a Stored Procedure executing the
same command, although I never tried it myself.
Bèrto
On 8 September 2010 03:17, Fabrízio de Royes Mello
wrote:
>
> 2010/9/7 Isabella Ghiurea
>
> Hi List,
>> I would like to know if there is an option to run full v
2010/9/7 Isabella Ghiurea
> Hi List,
> I would like to know if there is an option to run full vacuumdb for a
> specific schema only, I see there is option for tables or whole db .
>
>
No, but you can do like this using "psql" :
psql -U postgres -t -A -c "select 'VACUUM '||table_schema||'.'||table_name||';' from information_schema.tables where table_schema = 'myschema'" | psql -U postgres -d mydb
(here 'myschema' and mydb stand for your schema and database names)
Hi List,
I would like to know if there is an option to run full vacuumdb for a
specific schema only, I see there is option for tables or whole db .
Thank you
Isabella
--
---
Isabella A. Ghiurea
isabella.ghiu...@nrc-cnrc.gc.ca
Canadia
Carl Anderson writes:
> Running vacuumdb, with and without -f, I get output with final line:
> vacuumdb: vacuuming of database "Validation" failed: ERROR: failed to
> re-find parent key in index "pg_shdepend_reference_index" for deletion target
> page 380
You should be able to fix that with REINDEX.
I have a DB in constant autovacuum waiting mode:
1 S postgres 26262 4617 0 80 0 - 88927 semtim 10:19 ?00:00:00
postgres: autovacuum worker process Validation waiting
Running vacuumdb, with and without -f, I get output with final line:
vacuumdb: vacuuming of database "Validation" failed: ERROR: failed to
re-find parent key in index "pg_shdepend_reference_index" for deletion target
page 380
Oh..ok..then I guess I have to stick with vacuumdb -a (We are running 8.1.X )
Thank you
On Mon, Nov 2, 2009 at 6:52 PM, Alvaro Herrera
wrote:
> Anj Adu escribió:
>> And autovacuum will reset the XID counter even if it skips tables
>> right? Just wanted to confirm before enabling autovacuum.
Anj Adu escribió:
> And autovacuum will reset the XID counter even if it skips tables
> right? Just wanted to confirm before enabling autovacuum.
On 8.2 and up, yes.
--
Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development
And autovacuum will reset the XID counter even if it skips tables
right? Just wanted to confirm before enabling autovacuum.
Thanks
Sriram
On Mon, Nov 2, 2009 at 5:39 PM, Alvaro Herrera
wrote:
> Tomeh, Husam escribió:
>> How about if using autovacuum daemon instead?
>
> Autovacuum only processes
Tomeh, Husam escribió:
> How about if using autovacuum daemon instead?
Autovacuum only processes tables that need vacuuming, per the configured
parameters, so yes, it skips tables that were "recently" processed
(where "recently" is defined by said parameters and operations).
--
Alvaro Herrera
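The knobs involved live in postgresql.conf and look like this (8.1/8.2-era
parameter names; the values are illustrative assumptions, not tuning advice):

```
autovacuum = on
autovacuum_naptime = 60               # seconds between checks of each database
autovacuum_vacuum_threshold = 500     # min dead tuples before a table is vacuumed
autovacuum_vacuum_scale_factor = 0.2  # plus this fraction of the table's size
```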
Subject: Re: [ADMIN] vacuumdb knowledge of prior vacuum
Anj Adu escribió:
> Does vacuumdb have knowledge of a VACUUM that was done on a table in
> the prior run and skip it the next time (assuming the table does not
> change) ?
No.
--
Alvaro Herrera
Does vacuumdb have knowledge of a VACUUM that was done on a table in
the prior run and skip it the next time (assuming the table does not
change)? If not, is autovacuum smart enough to figure that out?
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
On Thu, 2009-10-15 at 09:24 -0300, Alvaro Herrera wrote:
> Simon Riggs escribió:
> > On Wed, 2009-10-14 at 13:57 -0300, Alvaro Herrera wrote:
> > > Anj Adu escribió:
> > >
> > > > I have several "daily" tables that get dropped every day..Is there a
> > > > wildcard that I can use to tell vacuumdb
Simon Riggs escribió:
> On Wed, 2009-10-14 at 13:57 -0300, Alvaro Herrera wrote:
> > Anj Adu escribió:
> >
> > > I have several "daily" tables that get dropped every day..Is there a
> > > wildcard that I can use to tell vacuumdb NOT to vacuum those
> > > tables...
> >
> > No. You need to do "INSERT INTO pg_autovacuum" (or ALTER TABLE/SET in 8.4)
On Wed, 2009-10-14 at 13:57 -0300, Alvaro Herrera wrote:
> Anj Adu escribió:
>
> > I have several "daily" tables that get dropped every day..Is there a
> > wildcard that I can use to tell vacuumdb NOT to vacuum those
> > tables...
>
> No. You need to do "INSERT INTO pg_autovacuum" (or ALTER TABLE/SET in 8.4)
Anj Adu escribió:
> I have several "daily" tables that get dropped every day..Is there a
> wildcard that I can use to tell vacuumdb NOT to vacuum those
> tables...
No. You need to do "INSERT INTO pg_autovacuum" (or ALTER TABLE/SET in 8.4)
just after you've created the table.
--
Alvaro Herrera
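To spell that out for a freshly created daily table (the table name here is
hypothetical; the storage-parameter form works from 8.4 on, while earlier
releases use a row in the pg_autovacuum catalog instead):

```sql
-- 8.4 and later: per-table storage parameter
ALTER TABLE daily_20091014 SET (autovacuum_enabled = false);
```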
Thanks
I have several "daily" tables that get dropped every day..Is there a
wildcard that I can use to tell vacuumdb NOT to vacuum those
tables...i.e my goal is to ensure that vacuumdb vacuums the entire
database (minus the daily tables that get dropped) so that the XID
wraparound value gets reset.
On Wed, Oct 14, 2009 at 1:34 AM, Simon Riggs wrote:
> On Tue, 2009-10-13 at 19:40 -0600, Scott Marlowe wrote:
>> On Tue, Oct 13, 2009 at 7:29 PM, Anj Adu wrote:
>> > I am running Postgres 8.1.9 on an 8 core Xeon 5430 box that is showing
>> > single digit CPU and IO utilization. the database size
On Tue, 2009-10-13 at 19:40 -0600, Scott Marlowe wrote:
> On Tue, Oct 13, 2009 at 7:29 PM, Anj Adu wrote:
> > I am running Postgres 8.1.9 on an 8 core Xeon 5430 box that is showing
> > single digit CPU and IO utilization. the database size is 820G .
> > Vacuum_cost_delay=0 and maintenance_mem = 900M
On Tue, Oct 13, 2009 at 7:29 PM, Anj Adu wrote:
> I am running Postgres 8.1.9 on an 8 core Xeon 5430 box that is showing
> single digit CPU and IO utilization. the database size is 820G .
> Vacuum_cost_delay=0 and maintenance_mem = 900M
>
> Is there an option to vacuumdb or a way to make it run parallel threads.
I am running Postgres 8.1.9 on an 8 core Xeon 5430 box that is showing
single digit CPU and IO utilization. the database size is 820G .
Vacuum_cost_delay=0 and maintenance_mem = 900M
Is there an option to vacuumdb or a way to make it run parallel threads.
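vacuumdb itself is single-threaded in these releases, but you can fan out one
process per table from the shell. A sketch using GNU xargs (table and database
names are made up, and the leading echo makes it a dry run; remove it to
actually execute the commands):

```shell
# Print (dry run) one vacuumdb command per table, up to 4 at a time.
printf '%s\n' sales events logs audit \
  | xargs -P 4 -n 1 -I{} echo vacuumdb -d mydb -t {}
```

With the echo removed, xargs -P 4 keeps four vacuumdb processes running until
the table list is exhausted.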
Decibel! <[EMAIL PROTECTED]> a écrit :
On Jun 21, 2008, at 8:47 AM, [EMAIL PROTECTED] wrote:
i use postgresql version7.4.7 on i386-pc-linux-gnu, autovacuum is
configured to run on this database.
But recently, we lost data for a database, we were able to connect
the database but we couldn't
Tom Lane <[EMAIL PROTECTED]> a écrit :
[EMAIL PROTECTED] writes:
i use postgresql version7.4.7 on i386-pc-linux-gnu, autovacuum is
configured to run on this database.
Hmm ... in theory autovacuum should have kept you out of trouble,
if it was working properly. Were you keeping an eye on its log
output?
On Jun 21, 2008, at 8:47 AM, [EMAIL PROTECTED] wrote:
i use postgresql version7.4.7 on i386-pc-linux-gnu, autovacuum is
configured to run on this database.
But recently, we lost data for a database, we were able to connect
the database but we couldn't see any table anymore.
I suspected a transaction ID wraparound.
[EMAIL PROTECTED] writes:
> i use postgresql version7.4.7 on i386-pc-linux-gnu, autovacuum is
> configured to run on this database.
Hmm ... in theory autovacuum should have kept you out of trouble,
if it was working properly. Were you keeping an eye on its log
output?
> Doing a manual vacuumdb
Hi,
i use postgresql version7.4.7 on i386-pc-linux-gnu, autovacuum is
configured to run on this database.
But recently, we lost data for a database, we were able to connect the
database but we couldn't see any table anymore.
I suspected a transaction ID wraparound, and to fix it, i just
imp
On Fri, Apr 25, 2008 at 06:03:12PM -0400, Bhella Paramjeet-PFCW67 wrote:
> No database is not sitting on NFS storage. We are using emc storage and
> the file system is fibre attached to storage.
What's the filesystem? Are you sure you don't have any bad memory in
the box? I'm suspicious of the
> Sent: Thursday, April 17, 2008 5:18 PM
> To: Bhella Paramjeet-PFCW67
> Cc: pgsql-admin@postgresql.org; Subbiah Stalin-XCGF84
> Subject: Re: [ADMIN] Vacuumdb error
>
> "Bhella Paramjeet-PFCW67" <[EMAIL PROTECTED]> writes:
> > We have our production postgres 8.0.10 database running on linux
-----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Thursday, April 17, 2008 5:18 PM
> To: Bhella Paramjeet-PFCW67
> Cc: pgsql-admin@postgresql.org; Subbiah Stalin-XCGF84
> Subject: Re: [ADMIN] Vacuumdb error
>
> "Bhella Paramjeet-PFCW67" <[EMAIL PROTECTED]> writes:
"Bhella Paramjeet-PFCW67" <[EMAIL PROTECTED]> writes:
> Error in the database vacuum log.
> INFO: vacuuming "public.securityevent"
> WARNING: relation "securityevent" TID 21/3: OID is invalid
That smells like a data corruption problem ...
> vacuumdb: vacuuming of database "ectest" failed: ERROR
Sent: Thursday, April 17, 2008 5:18 PM
To: Bhella Paramjeet-PFCW67
Cc: pgsql-admin@postgresql.org; Subbiah Stalin-XCGF84
Subject: Re: [ADMIN] Vacuumdb error
"Bhella Paramjeet-PFCW67" <[EMAIL PROTECTED]> writes:
> We have our production postgres 8.0.10 database running on linux
> x86_64 machine. Recently we have started getting an error from one of our
> database while running vacuumdb.
"Bhella Paramjeet-PFCW67" <[EMAIL PROTECTED]> writes:
> We have our production postgres 8.0.10 database running on linux x86_64
> machine. Recently we have started getting an error from one of our
> database while running vacuumdb. We are not getting this error during
> backups just only during vacuuming of a database.
Hi,
We have our production postgres 8.0.10 database running on linux x86_64
machine. Recently we have started getting an error from one of our
database while running vacuumdb. We are not getting this error during
backups just only during vacuuming of a database. Can anyone please help
us figure out
Hi!
I want to vacuum analyze a number of tables in a given database (on
port 5432) from a shell script. There is no need to process the whole
database.
Using
vacuumdb -d mydb -t mytab1 -t mytab2 -t mytab3 -fzp 5432;
silently operates on the last tablename given only. 'mytab3' in the
example above.
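Until that behavior is sorted out, a loop that issues one vacuumdb call per
table does what was intended (the echo is a dry run that just shows the
commands; drop it to run them for real):

```shell
# One vacuumdb invocation per table, mirroring the flags from the question.
for t in mytab1 mytab2 mytab3; do
  echo vacuumdb -d mydb -f -z -t "$t" -p 5432
done
```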
Juliann Meyer <[EMAIL PROTECTED]> writes:
>vacuumdb: vacuuming of database "adb_ob72orn" failed: ERROR: could not
>read block 18658 of relation "pecrsep_time_ind": Input/output error
This says read() failed with errno EIO, ie the operating system reported
a hardware failure while trying t
Have 14 systems scattered across the country where... hardware is
identical, all run linux OS RHE4.0 with postgresql v7.4.8.
The hardware is about 4 years old and is scheduled for replacement
sometime next year.
The database structure is identical on each system, the difference is in
the data th
Bruno Wolff III wrote:
> On Wed, Aug 09, 2006 at 11:24:03 -0700,
> Joel Stevenson <[EMAIL PROTECTED]> wrote:
> > I have a database that includes both highly transactional tables and
> > archive tables - OLTP and OLAP mixed together. Some of the archival
> > tables, which only experience inserts and reads, not updates or
On Wed, Aug 09, 2006 at 11:24:03 -0700,
Joel Stevenson <[EMAIL PROTECTED]> wrote:
> I have a database that includes both highly transactional tables and
> archive tables - OLTP and OLAP mixed together. Some of the archival
> tables, which only experience inserts and reads, not updates or
> de
On Thu, 2006-08-10 at 09:44, Joel Stevenson wrote:
> Thanks for the reply. I'm still playing around a bit with my
> autovacuum settings and the manual vacuum runs I'm making are part of
> an effort to better understand just what sort of settings I need
> - I'm running vacuumdb with verbose
Thanks for the reply. I'm still playing around a bit with my
autovacuum settings and the manual vacuum runs I'm making are part of
an effort to better understand just what sort of settings I need
- I'm running vacuumdb with verbose output, etc.
Since I know that these archive tables are a
On Wed, 2006-08-09 at 23:01, adey wrote:
> Does autovacuum replace the need for a FULL vacuum please (to recover
> free space, etc)?
In most cases, yes. The idea is that autovacuum should put the database
into a "steady state" where there is some % of each table that is free
space and being recycled.
Does autovacuum replace the need for a FULL vacuum please (to recover free space, etc)?
On 8/10/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-08-09 at 13:24, Joel Stevenson wrote:
> Hi,
>
> I have a database that includes both highly transactional tables and
> archive tables - OLTP and OLAP
On Wed, 2006-08-09 at 13:24, Joel Stevenson wrote:
> Hi,
>
> I have a database that includes both highly transactional tables and
> archive tables - OLTP and OLAP mixed together. Some of the archival
> tables, which only experience inserts and reads, not updates or
> deletes, contain many mill
Hi,
I have a database that includes both highly transactional tables and
archive tables - OLTP and OLAP mixed together. Some of the archival
tables, which only experience inserts and reads, not updates or
deletes, contain many millions of rows and so they take a *long* time
to vacuum. Is th
On Sat, May 13, 2006 at 03:47:00AM -0500, Thomas F. O'Connell wrote:
>
> On May 13, 2006, at 12:35 AM, Tom Lane wrote:
>
> >VACUUM FULL does all right at packing the table (except in
> >pathological
> >cases, eg a very large tuple near the end of the table). It mostly
> >bites as far as shrinking indexes goes, however.
On May 13, 2006, at 12:35 AM, Tom Lane wrote:
VACUUM FULL does all right at packing the table (except in
pathological
cases, eg a very large tuple near the end of the table). It mostly
bites as far as shrinking indexes goes, however. If you've got a
serious index bloat problem then REINDEX
"Thomas F. O'Connell" <[EMAIL PROTECTED]> writes:
> Shortly after I kicked it off, I watched the number of connections
> trend upward as a result of the aggressive locking of FULL. I didn't
> want to let this continue without notifying the developers about a
> potential downtime for their app
Tonight as part of a scheduled maintenance operation, I was going to
perform a VACUUM FULL ANALYZE on a postgres 8.1.3 instance that had
just had its FSM settings increased to account for about 2 years'
worth of growth (particularly in number of relations).
Shortly after I kicked it off, I
Hmmm, AFAIK no, but you can get the age of the databases and guess whether
you need to issue a vacuum or not. Quote from the documentation
(www.postgresql.org -> documentation):
SELECT datname, age(datfrozenxid) FROM pg_database;
The age column measures the number of transactions from the cutoff XID to
the current transaction's XID.
On Mon, Feb 13, 2006 at 01:42:06PM -0300, Juliano wrote:
> Hi all,
>
> Is there a way to see when vacuumdb was last run?
If you mean you'd like to know when a vacuum was last run on a table,
sadly, there is no way. Though if you're using 8.1 and have autovacuum
enabled there may be some way to see wh
Hi all,
Is there a way to see when vacuumdb was last run?
tks -- Juliano
On Wed, Aug 17, 2005 at 02:15:42PM -0400, D Kavan wrote:
>
> 'du -h' in the base directory.
>
> >How are you finding out the DB size?
You are considering the difference in XLog segment size, right? The
pg_xlog directory may start almost empty and then start filling. The
space will not be recov
'du -h' in the base directory.
How are you finding out the DB size?
G.-
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
> > #kernel.shmall = 2097152
> > #kernel.shmmax = 2147483648
> > #kernel.shmmax = 1073741824
> > kernel.shmmax = 6979321856
> > kernel.shmmni = 4096
> > kernel.sem = 250 32000 100 128
> > fs.file-max = 65536
> > net.ipv4.ip_local_port_range = 1024 65000
> > vm.overcommit_memory = 2
From: Tom Lane <[EMAIL PROTECTED]>
To: "D Kavan" <[EMAIL PROTECTED]>
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] vacuumdb -a -f
Date: Mon, 15 Aug 2005 21:31:01 -0400
"D Kavan" <[EMAIL PROTECTED]> writes:
> Even though I run vacuumdb -a -f every night with no
"D Kavan" <[EMAIL PROTECTED]> writes:
> Even though I run vacuumdb -a -f every night with no exceptions or problems,
> my database size remains 5.6 GB. After I do a dump/restore, the new
> database size is 4.0 GB. How could that be possible?
The extra 1.6GB probably represents the amount of ju
Hello,
Even though I run vacuumdb -a -f every night with no exceptions or problems,
my database size remains 5.6 GB. After I do a dump/restore, the new
database size is 4.0 GB. How could that be possible? That's a significant
amount of space. If I can clean up my existing databases by 30%
[EMAIL PROTECTED] ("Kevin Copley") writes:
> Hi,
>
> I've just put a system into production in which some tables are updated
> frequently - several times per
> second.
>
> I'm doing a nightly vacuumdb -v, but am not sure if it's achieving anything.
> Here's the output for one
> table:
Hi,
I've just put a system into production in which
some tables are updated frequently - several times per second.
I'm doing a nightly vacuumdb -v, but am not sure if
it's achieving anything. Here's the output for one table:
--
Hi Everybody,
If I do vacuumdb -z daily as a part of my
maintenance, do I still have to vacuum analyze each table in the
database? The reason I am asking is that even though I do vacuumdb
daily my stats are not updated properly in pg_class view and the
reltuples value is much higher than
Marcello Perathoner <[EMAIL PROTECTED]> writes:
> I get this error when I try to analyze a column containing md5 hashes.
> The data type of the column is: bytea. The database encoding is UNICODE.
> Does anybody know a workaround for this?
> PostgreSQL 7.3.3 on i686-redhat-linux-gnu, compiled by
I get this error when I try to analyze a column containing md5 hashes.
The data type of the column is: bytea. The database encoding is UNICODE.
Does anybody know a workaround for this?
Thanks.
$ vacuumdb --analyze --table 'files(md5hash)'
ERROR: Invalid UNICODE character sequence found (0xdb51)
I'm using PostgreSQL 7.0.3.
I'm doing vacuumdb in my database every night to repair it. But now I have a
problem. I'm looking in /var/lib/pgsql/data/base/ at the tables' length, and
the primary keys of the tables are not cleaned and are increasing every day.
What can I do to clean it?
Thank you for
> I'm using PostgreSQL 7.0.3.
> I'm doing vacuumdb in my database every night to repair it. But now I have a
> problem. I'm looking in /var/lib/pgsql/data/base/ at the tables' length, and
> the primary keys of the tables are not cleaned and are increasing every day.
> What can I do to clean it?
> Than