"Thorne, Francis" writes:
> Last night I got the error
> Error Relation 41036649 deleted while still in use
This is not particularly surprising in 8.1 --- it has some race
conditions that can result in that type of error if vacuum (or anything
else) tries to open a table just as something else is …
Hi all,
Any help on the following would be greatly appreciated. Every evening
most of the data on our Postgres 8.1 install is deleted and then a new
set of data is imported into the database (around 100 million rows).
After this takes place we run a Vacuum Analyse on the whole database.
Last night I got the error "Relation 41036649 deleted while still in use" …
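The nightly reload described above might look roughly like the following sketch; the table name, database layout, and COPY source path are hypothetical, not taken from the original post.

```sql
-- Sketch of a nightly delete-and-reload followed by a database-wide
-- VACUUM ANALYSE, as the poster describes. Names and paths are invented.
BEGIN;
TRUNCATE ccm_data;                                 -- drop yesterday's rows
COPY ccm_data FROM '/data/nightly_load.csv' CSV;   -- bulk-load ~100M rows
COMMIT;

-- VACUUM cannot run inside a transaction block, so it comes afterwards.
VACUUM ANALYSE;
```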
"Paul B. Anderson" <[EMAIL PROTECTED]> writes:
> I have a cluster of machines and the databases are on shared disk
> storage. I'm just getting this arrangement working and, while I've been
> debugging my scripts, I've accidentally had two copies of postgresql
running against the same initdb'd data directory …
Here are a couple of new data points on this issue.
Deleting all records in pg_statistic and then reindexing clears the
problem but I've had the problem in two of my own databases in two
separate postgresql instances as well as the postgres database in both
instances.
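The workaround described here, emptying pg_statistic and reindexing it, can be sketched in SQL as follows. This is a rough reconstruction, not the poster's exact statements; it requires superuser rights, and it discards all planner statistics until ANALYZE repopulates them.

```sql
-- Empty the statistics catalog and rebuild its indexes, then let
-- ANALYZE regenerate the statistics for every table in the database.
DELETE FROM pg_statistic;
REINDEX TABLE pg_statistic;
ANALYZE;
```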
andy <[EMAIL PROTECTED]> writes:
> So I'm ok, but I tried it again, by dropping the database and re-running
> both scripts and got the same error again. So thought I'd offer a test
> case if there was interest.
Absolutely. I've seen just enough of these reports to make me think
there's an underlying bug here. …
Tom Lane wrote:
> "Paul B. Anderson" <[EMAIL PROTECTED]> writes:
>> I did delete exactly one of each of these using ctid and the query then
>> shows no duplicates. But, the problem comes right back in the next
>> database-wide vacuum.
> That's pretty odd --- I'm inclined to suspect index corruption.
I removed the duplicates and then immediately reindexed. All is
well. The vacuum analyze on the postgres database works now too.
Thanks.
It is good to know the pg_statistic table can be emptied in case this
ever happens again.
Paul
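Paul's ctid-based cleanup followed by a reindex can be reconstructed as the sketch below. The excerpt does not show his exact query, so this is an assumed generalization: it keeps one arbitrary physical row per (starelid, staattnum) pair, which is the column pair covered by the unique index that was being violated.

```sql
-- Delete physical duplicates in pg_statistic by ctid, keeping one row
-- per (starelid, staattnum) pair, then rebuild the corrupted index.
DELETE FROM pg_statistic
WHERE ctid NOT IN (
    SELECT DISTINCT ON (starelid, staattnum) ctid
    FROM pg_statistic
    ORDER BY starelid, staattnum
);
REINDEX TABLE pg_statistic;
```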
Tom Lane wrote:
"Paul B. Anderson" <[EMAIL PROTECTED]>
"Paul B. Anderson" <[EMAIL PROTECTED]> writes:
> I did delete exactly one of each of these using ctid and the query then
> shows no duplicates. But, the problem comes right back in the next
> database-wide vacuum.
That's pretty odd --- I'm inclined to suspect index corruption.
> I also tried r…
I'm running PostgreSQL 8.1.4 on Red Hat Enterprise Linux 3.
Things have been working well for a while but in the last few days, I've
gotten the following error during a nightly vacuum.
postgres=# vacuum analyze;
ERROR: duplicate key violates unique constraint
"pg_statistic_relid_att_index"
07/03/2006 16:34
Reply-To: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
cc:
Subject: Re: [ADMIN] VACUUM Error?
If you do "ps auxwww|grep
postgres" on your console command line - you should find processes
with a status of "IDLE IN TRANSACTION" or simila
again for your help!
I will give you a feedback!
Best regard
Fabrice
"Andy Shellam"
<[EMAIL PROTECTED]>
Envoyé par : [EMAIL PROTECTED]
07/03/2006 16:34
Veuillez répondre à
<[EMAIL PROTECTED]>
A
<[EMAIL PROTECTED]>
cc
Objet
Re: [ADMIN] VACUUM
e à
<[EMAIL PROTECTED]>
A
<[EMAIL PROTECTED]>
cc
Objet
Re: [ADMIN] VACUUM Error?
If you do "ps auxwww|grep
postgres" on your console command line - you should find processes
with a status of "IDLE IN TRANSACTION" or similar, and use that
data and the
connection.
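The same information is also visible from SQL on a server of this vintage, assuming stats_command_string is enabled (the pg_stat_activity column names shown here are the 8.x ones; they were renamed in later releases):

```sql
-- On PostgreSQL 8.x, an idle-in-transaction backend reports the
-- literal string '<IDLE> in transaction' as its current query.
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```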
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of [EMAIL PROTECTED]
Sent: Tuesday, 07 March, 2006 3:25 pm
To: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] VACUUM Error?
Hello Tom Lane, thank you very much for your answer!
My PG version is older than 7.3, I know it, so …
<[EMAIL PROTECTED]>
07/03/2006 15:52
To: [EMAIL PROTECTED]
cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] VACUUM Error?
[EMAIL PROTECTED] writes:
> But I had the error: "ERROR: Parent tuple was not found"
What PG version is this? We recently fixed some bugs that could lead to
this error.
The error could only occur if you have some old open transaction(s) that
could possibly still see since-updated tuples in the vacuumed table.
Hello,
I executed a Vacuum on my Postgres database.
You can see the command line:
/vacuumdb -f -z -v -d ccm > /tmp/vacuum.txt 2> /tmp/vacuumError.txt
That is how I performed the vacuum.
But I got the error "ERROR: Parent tuple was not found
vacuumdb: vacuum ccm failed"
each time I execute the …
"Zhang, Anna" <[EMAIL PROTECTED]> writes:
> ERROR: RelationBuildTriggers: 2 record(s) not found for rel domain.
> I deleted triggers that referenced domain before vacuum, is this the cause?
If you deleted them via "delete from pg_trigger" and not by DROP
TRIGGER, then yes.
You'll need to manually …
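For contrast, the supported way to remove a trigger is DROP TRIGGER, which keeps the dependent catalog state consistent. The trigger name below is hypothetical, and the pg_class repair is an assumed reconstruction of the manual fix Tom alludes to, appropriate to 7.x-era servers where pg_class carries a reltriggers count; verify the actual trigger count (and take a backup) before running anything like it as superuser.

```sql
-- Supported path: DROP TRIGGER also keeps pg_class.reltriggers in sync.
DROP TRIGGER my_domain_trigger ON domain;  -- hypothetical trigger name

-- If rows were already deleted from pg_trigger directly, the stale
-- trigger count in pg_class is what trips RelationBuildTriggers.
-- One possible repair, assuming no triggers remain on the table:
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'domain';
```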
On Fri, 8 Mar 2002, Zhang, Anna wrote:
> Thanks Stephan! your suggestion works. I just wonder that if dropping
> triggers caused such problem, this might be a postgres bug.
Did you drop them or did you delete from pg_trigger? AFAIK this only
happens with the latter (which isn't meant as normal …
Hi,
I got an error when vacuum database, see below:
$vacuumdb xdap
ERROR: RelationBuildTriggers: 2 record(s) not found for rel domain.
I deleted triggers that referenced domain before vacuum, is this the cause?
How can I fix it? Or doesn't it matter, and is it OK to ignore such an error?
Thanks in advance
brian <[EMAIL PROTECTED]> writes:
> I have a small database with a few tables and a view. When I issue the
> vacuum command, I get:
> ERROR: cannot read block 5 of pg_description_objoid_index: Input/output
> error
Ugh, sounds like a disk hardware problem :-(. Better think about new
drives, or …
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of brian
Sent: Friday, June 15, 2001 12:37 PM
To: [EMAIL PROTECTED]
Subject: [ADMIN] vacuum error
Friends,
Help...
I have a small database with a few tables and a view. When I issue the
vacuum command, I get:
ERROR: cannot read block 5 of pg_description_objoid_index: Input/output
error
Same thing happens when I try pg_dump or pg_dumpall as well. With
pg_dump, I can dump individual tables …
Hi. I'm having the following error when I try to vacuum a table:
horde=# vacuum verbose analyze active_sessions;
NOTICE: --Relation active_sessions--
NOTICE: Pages 144114: Changed 1, reaped 144113, Empty 0, New 0; Tup 3078:
Vac 564305, Keep/VTL 0/0, Crash 0, UnUsed 1046335, MinLen 1
I get the following when I do a vacuum verbose
NOTICE: CreatePortal: portal already exists
NOTICE: CreatePortal: portal already exists
When I look at the tables it is reading, it is only doing system tables and not any of
my tables.
Any ideas on what might cause this? I am running 6.4 on S…