So this error means some kind of data corruption, right?
On 12/18/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Edoardo Ceccarelli" <[EMAIL PROTECTED]> writes:
> pg_dump: ERROR: could not access status of transaction 1629514106
DETAIL: could not open file "pg_clog/0612": No such file or directory
You can recreate the missing pg_clog segment as a zero-filled file; once that is
done you can try going through the pg_dump process again.
This suggestion might help you as well -->
http://archives.postgresql.org/pgsql-admin/2003-02/msg00263.php
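For reference, the fix suggested in that archived thread is to recreate the missing clog segment as a zero-filled file so pg_dump can get past the error; a sketch, assuming an 8.1 server (whose clog segments are 256 kB) and the segment name from the error above. Take a file-level backup of the data directory first, since rows touched by the affected transactions may afterwards be treated as aborted:

    # run as the postgres user, from the cluster's data directory
    cd $PGDATA
    # create a zero-filled 256 kB segment named as in the error message
    dd if=/dev/zero of=pg_clog/0612 bs=256k count=1

Then re-run pg_dump and see whether it gets past table "annuncio400".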
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/18/06, Edoardo Ceccarelli <[EMAIL PROTECTED]> wrote:
sorry, forgot to mention: PG 8.1.5
On 12/18/06, Edoardo Ceccarelli <[EMAIL PROTECTED]> wrote:
Hi,
just encountered this error trying to dump my db:
any ideas?
Thank you
pg_dump: ERROR: could not access status of transaction 1629514106
DETAIL: could not open file "pg_clog/061
Hi,
just encountered this error trying to dump my db:
any ideas?
Thank you
pg_dump: ERROR: could not access status of transaction 1629514106
DETAIL: could not open file "pg_clog/0612": No such file or directory
pg_dump: SQL command to dump the contents of table "annuncio400" failed:
PQendcopy() failed.
On the other hand, dumping a newer version database with an older
version of *pg_dump* is much more likely to succeed. It's not a
guarantee, but it should get you pretty close. And as someone else
mentioned, doing a plain text dump is probably your best bet in this case.
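A sketch of that approach (the binary path, host, and database name are placeholders): point the older release's pg_dump at the newer server and take a plain-text dump that the older server can later reload:

    # the OLD release's pg_dump against the new server, plain-text output
    /usr/local/pgsql-7.4/bin/pg_dump -h newhost -Fp mydb > mydb-for-old.sql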
No, that's not an option.
Thank you for this information. I wasn't sure about it; now, at least, I know
that this is not possible.
On the other hand, even the most experienced database programmer or
administrator can't be 100% sure, when upgrading a production environment,
whether the application will fail or not.
Of course ...
Hi to all,
We have a pretty big database that is going for an upgrade (PG7 -> PG8)
in the next few days. We have tested all the features of our application,
but we cannot be sure that everything will work out perfectly (the db is
managing several blob-only tables that have proven to be pretty hard to deal with).
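For the mechanics of the move itself, the usual dump-and-reload sketch for the 7.x -> 8.x era (paths, hosts, and the database name mydb are placeholders); the custom format with -b matters here because plain-text dumps of that era could not include large objects:

    # dump with the NEW release's pg_dump, including blobs (-b needs -Fc or -Ft)
    /usr/local/pgsql-8.1/bin/pg_dump -h oldhost -Fc -b mydb > mydb.dump
    # reload into the new server
    /usr/local/pgsql-8.1/bin/pg_restore -h newhost -d mydb mydb.dump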
Hello,
we are running a 7.3 postgres db with only one big table (avg.
500,000 records) and 7 indexes for a search engine.
We have two of these databases and we can switch from one to another.
Last week we decided to give 8.1 a try on one of them and everything
went fine; the db is faster (about 2 or 3 times)
I have searched the archives to see if there is something helpful.
The only useful post I've found is this one:
http://archives.postgresql.org/pgsql-jdbc/2001-11/msg00306.php
Thank you
Edoardo Ceccarelli
This is the query:
select max(KA) from annuncio
field KA is int4 and indexed; explaining gives:
explain select max(KA) from annuncio;
                        QUERY PLAN
-------------------------------------------------------------
 Aggregate  (cost=21173.70..21173.70 rows=1 width=4)
   ->  Seq Scan on annuncio  (cost=...)
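On releases of that era max() could not use a btree index, so a sequential scan was inevitable; the classic workaround is the equivalent ORDER BY/LIMIT form, which the planner can satisfy with a backwards scan of the index on KA:

    -- equivalent to select max(KA), but able to use the index
    SELECT KA FROM annuncio ORDER BY KA DESC LIMIT 1;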
do you mean that, by declaring a column serial, I'd never have to deal with
incrementing its primary key? good to know!
anyway, in this particular situation I don't need such accurate
behaviour: this table is filled up with a lot of data twice per week and
it's used only to answer queries.
I could
I am going to use them as the primary key of the table, so I'll surely need
them to be unique :)
thank you for you help
Edoardo
Dave Cramer wrote:
Edoardo,
Are you using them for referential integrity? If so you would be wise to
use sequences instead.
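For completeness, a minimal sketch of the serial approach (the table and column names here are made up): the sequence behind a serial column hands out unique values automatically, so there is no manual incrementing:

    -- "serial" creates and attaches a sequence behind the scenes
    CREATE TABLE annuncio_demo (
        id     serial PRIMARY KEY,
        titolo text
    );
    -- no id supplied: the sequence assigns the next unique value
    INSERT INTO annuncio_demo (titolo) VALUES ('prova');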
Christopher: yes, you are correct, I wasn't sure
I am using the oid of the table as the main key, and I've found that it is
not indexed (maybe because I have declared another primary key in the table).
Is it good practice to create an index like this on the oid of a table?
CREATE INDEX idoid ON annuncio400 USING btree (oid);
does it work like a normal index?
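Assuming the corrected syntax above, it does behave like any other btree index; for example, a lookup by oid (the value below is made up) can use it:

    SELECT * FROM annuncio400 WHERE oid = 123456;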
I was thinking
about putting this table into a kind of "read-only" mode to improve
performance; is this possible?
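There was no true read-only table mode in releases of that era; the closest common routine, sketched here with the table and index names used in this thread, is to refresh statistics (and optionally re-order the table) after each bulk load:

    -- after each twice-weekly load:
    VACUUM ANALYZE annuncio400;
    -- optionally rewrite the table in index order (pre-8.3 CLUSTER syntax)
    CLUSTER idoid ON annuncio400;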
Thank you for your help
Edoardo Ceccarelli
Is there a way to access blobs through the standard JDBC api?
I looked in the examples under src/interfaces/jdbc/ but all the blob examples
use that type of access.
Thank You
--
Edoardo Ceccarelli
> > AFTER (actual db)
> > portaportese=# explain analyze SELECT * FROM utente where luogorilasciodoc='ciao';
> > QUERY PLAN
> > -------------------------------------------------------------
I have a simple query that scans each record, like this:
  select * from utente where luogorilasciodoc='ciao'
The execution time BEFORE vacuum is: 1203ms
The execution time AFTER vacuum is: 6656ms !!!
What is going on? I thought that vacuuming the db was supposed to give better performance!
PostgreSQL Ver. 7.3.4
Well, maybe it was more, can't remember, it was at 3am! :)
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Sam
> Barnett-Cormack
> Sent: Thursday, 5 February 2004, 16:00
> To: Edoardo Ceccarelli
> Cc: 'David F. Skoll'
> But if the Original Poster is encountering that the database
> is doing Seq Scans when it would be better to do an Index
> Scan, that is a separate problem, and focusing on the VACUUM
> may distract from the _real_ problem...
> --
I have only noticed that, after a VACUUM ANALYZE of the db, the query became much slower.
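A standard way to check whether the seq scan itself is the problem is to disable it for one session and compare the plans; a sketch:

    -- diagnostic only; never set this globally
    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM utente WHERE luogorilasciodoc = 'ciao';
    SET enable_seqscan = on;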
> From: David F. Skoll [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, 5 February 2004, 3:17
> To: Edoardo Ceccarelli
> Cc: [EMAIL PROTECTED]
> Subject: Re: R: [ADMIN] slow seqscan after vacuum analize
>
> On Thu, 5 Feb 2004, Edoardo Ceccarelli wrote:
>
> > Things are worse only for seqscans
> > Seq Scan on utente  (cost=0.00..92174.50 rows=3 width=724) (actual
> > time=705.41..6458.19 rows=15 loops=1)
> >   Filter: (luogorilasciodoc = 'ciao'::bpchar)
> > Total runtime: ...
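That plan also shows the planner expecting 3 rows where 15 actually come back; a common step on 7.3, if the mis-estimate turns out to matter, is to raise the statistics sample for the filtered column and re-analyze:

    -- larger sample for this column, then refresh the statistics
    ALTER TABLE utente ALTER COLUMN luogorilasciodoc SET STATISTICS 100;
    ANALYZE utente;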
> -----Original Message-----
> From: scott.marlowe [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, 4 February 2004, 22:45
> To: Edoardo Ceccarelli
> Cc: [EMAIL PROTECTED]
> Subject: Re: [ADMIN] slow seqscan after vacuum analize
>
> On Wed, 4 Feb 2004, Edoardo Ceccarelli wrote:
I have a simple query that scans each record, like this:
  select * from utente where luogorilasciodoc='ciao'
The execution time BEFORE vacuum is: 1203ms; the execution time AFTER vacuum is: 6656ms
!!!
What is going on? I thought that vacuuming the db was supposed to give better performance!
PostgreSQL Ver. 7.3.4