I've got a database where I've lost the WAL files. Since PG won't start
with bad WAL files, is there any way I can get the data out of the actual
database files?
contrib/pg_resetxlog. Please read its README.
Could you tell us how you lost the WAL files?
Vadim
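For the archive: pg_resetxlog is run against the data directory while the
postmaster is stopped. A minimal invocation sketch, assuming a default
installation path (adjust for your layout):

pg_resetxlog /usr/local/pgsql/data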
We need to control database changes within BEFORE triggers.
There is no problem with triggers called by UPDATE, but there is
a problem with triggers called by INSERT.
We strongly need to know the oid of a newly inserted tuple.
In the UPDATE case, we use tg_newtuple of the TriggerData structure.
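For context, a minimal sketch of the kind of trigger being discussed; the
table, function name, and library path are hypothetical, and the trigger
function itself would be written in C and receive the TriggerData structure:

create function check_insert() returns opaque
    as '/usr/local/pgsql/lib/check_insert.so' language 'C';
create trigger check_insert_trig before insert on letters
    for each row execute procedure check_insert();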
Probably we could optimize this somehow, but allocation of a new page in
bufmgr is horrible, and that's why we have had locks in hio.c from the
beginning.
See the later message about eliminating lseeks --- I think we should be
able to avoid taking this lock for every single tuple, as it does now.
The filesystem with pg_xlog filled up, and the
backend (all backends) died abnormally. I can't
restart postmaster, either.
Is it OK to delete the files from pg_xlog? What
will be the result?
It's not OK, though you could remove files numbered from
00 to 00012 (in hex), if any.
OK, thanks. Is there any documentation on these files, and on what
our options are if something like this happens again?
However, the result was (running on 64-bit Solaris 7, E250/1 GB
RAM) that Oracle was roughly twice as fast. Is this real,
or might there be stuff I could do to improve Postgres
performance?
PostgreSQL version?
7.0.2
Try 7.1 or use -F.
You didn't use -F, did you?
What's -F?
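For reference, -F is the backend option that disables fsync() after each
transaction, trading crash safety for speed. A sketch of how it is
typically passed through the postmaster (data directory path assumed):

postmaster -o "-F" -D /usr/local/pgsql/data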
Try committing every 500 or 1000 inserts. It seems to me that
PG slows down if there are lots of MBs waiting for commit...
MB == memory buffers?
If so, then that shouldn't be the case for 7.0.X, with Tom's work in the bufmgr area.
Vadim
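A minimal sketch of the batching suggestion above; the table and values
are hypothetical:

begin;
insert into log_data values (1, 'row 1');
insert into log_data values (2, 'row 2');
-- ... batch 500-1000 inserts per transaction ...
commit;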
The best way to solve this would be to remove the feature of keeping
deleted/updated records in the database files, and therefore the
need to vacuum.
Is there any way to configure this when compiling? Or are there other
possibilities?
There will be, hopefully, in 7.2 only -:(
Added to TODO:
* Move OID retrieval into shared memory to prevent loss of unused oids
Already implemented. But up to 8192 oids may be lost in the event
of a crash (i.e. without a normal database shutdown, at which the last
fetched oid is logged to WAL).
Also, currently the oid can _not_ be used to
We have talked about this, but we have not seen enough
corruption cases to warrant it.
Huh?! Just do pg_ctl -m immediate stop in <= 7.0.X with high
insert activity and... pray.
It's changed in 7.1 by WAL - btree doesn't lose tuples in split
ops anymore, and after a crash restart you'll never
I think that under 7.1 pg_log is not so critical anymore, but I'm not
sure. Vadim, any comment?
Still critical until we implement UNDO and true rollback of changes on
transaction abort.
Vadim
That is what transactions are for. If any error occurs, then the
transaction is aborted. You are supposed to use
transactions when you want either everything to occur
(the whole transaction) or nothing, if an error occurs.
Yes. There are certainly times when a transaction aborting on the first
error is not what you want. For example:
create table letter (
    a text,    -- column types not shown in the original; assumed here
    b text,
    c text,
    unique (a, b, c)
);
insert into letter values ('1','2','3');
insert into letter values ('1','2','3');     -- fails: violates unique (a, b, c)
insert into letter (a,c) values ('1','3');
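To make the all-or-nothing behavior concrete, a sketch using the same table:

begin;
insert into letter values ('4','5','6');
insert into letter values ('4','5','6');  -- duplicate key error; transaction is now aborted
commit;                                   -- ends the aborted transaction; both inserts are rolled back
select * from letter where a = '4';       -- returns no rows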