Looks good to me. The log file numbering scheme seems to have changed - is
that part of the fix too?
Tom Lane wrote:
This is done in CVS tip. Mark, could you retest to verify it's fixed?
regards, tom lane
Mark Kirkwood [EMAIL PROTECTED] writes:
Looks good to me. The log file numbering scheme seems to have changed - is
that part of the fix too?
That's for timelines ... it's not directly related but I thought I
should put in both changes at once to avoid forcing an extra initdb.
Simon Riggs [EMAIL PROTECTED] writes:
On Tue, 2004-07-20 at 15:00, Tom Lane wrote:
Yeah, but the WASTED_SPACE/FILE_HEADER stuff is already pretty ugly, and
adding two more warts to the code to support it is sticking in my craw.
I'm thinking it would be cleaner to treat the extra labeling
On Tue, 2004-07-20 at 05:14, Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
I have been doing some re-testing with CVS HEAD from about 1 hour ago
using the simplified example posted previously.
It is quite interesting:
The problem seems to be that the computation of
Great that it's not fundamental - and hopefully with this discovery, the
probability you mentioned is being squashed towards zero a bit more :-)
Don't let this early bug detract from what is really a superb piece of work!
regards
Mark
Tom Lane wrote:
In any case this isn't a fundamental bug,
Simon Riggs [EMAIL PROTECTED] writes:
The quick and dirty solution would be to dike out the safety check at
4268ff.
If you take out that check, we still fail because the wasted space at
the end is causing a record with zero length error.
Ugh. I'm beginning to think we ought to revert the patch that added the
don't-split-across-files logic to XLogInsert; that seems to have broken
more assumptions than I realized.
Simon Riggs [EMAIL PROTECTED] writes:
On Tue, 2004-07-20 at 13:51, Tom Lane wrote:
Ugh. I'm beginning to think we ought to revert the patch that added the
don't-split-across-files logic to XLogInsert; that seems to have broken
more assumptions than I realized.
The problem was that a zero
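The don't-split-across-files behavior under discussion can be pictured with a toy writer. This is a simplified model, not the real XLogInsert: if a record does not fit in the current fixed-size segment, the writer zero-fills the remainder and starts the record at the front of the next segment. A reader that treats a zero length header as a hard error then chokes on exactly that padding, which is the failure Simon describes.

```python
SEG_SIZE = 16  # toy segment size; real WAL segments are 16 MB

def write_segments(records):
    """Pack records into fixed-size segments without splitting any record
    across a segment boundary; leftover space is zero-filled."""
    segments, cur = [], b""
    for payload in records:
        rec = bytes([len(payload)]) + payload   # 1-byte length header
        if len(cur) + len(rec) > SEG_SIZE:      # record won't fit: pad and roll over
            cur += b"\x00" * (SEG_SIZE - len(cur))
            segments.append(cur)
            cur = rec
        else:
            cur += rec
    segments.append(cur + b"\x00" * (SEG_SIZE - len(cur)))
    return segments

segs = write_segments([b"aaaaaa", b"bbbbbb", b"cccccc"])
# The first segment holds two records plus two bytes of zero padding;
# the third record starts at the front of the second segment.
```

A recovery reader walking the first segment hits the zero bytes after the second record and must interpret them as "skip to the next segment", not as a corrupt record.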
On Tue, 2004-07-20 at 14:11, Simon Riggs wrote:
FYI - I can confirm that the patch fixes the main issue.
Simon Riggs wrote:
This was a very confusing test...Here's what I think happened:
The included patch doesn't attempt to address those issues, yet.
Best regards, Simon Riggs
This is presumably a standard feature of any PITR design - if the
failure event destroys the current transaction log, then you can only
recover transactions that committed in the last *archived* log.
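For reference, the archiving side of this design is driven by a shell command configured in postgresql.conf; a minimal sketch (the archive directory here is a made-up example):

```
# postgresql.conf -- the archive location is hypothetical
archive_command = 'cp %p /mnt/server/archivedir/%f'
```

Until archive_command has succeeded for a segment, that segment exists only in pg_xlog, which is why losing the current log caps recovery at the last *archived* segment.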
regards
Mark
Simon Riggs wrote:
The test works, but gives what looks like strange results: the
I have been doing some re-testing with CVS HEAD from about 1 hour ago
using the simplified example posted previously.
It is quite interesting:
i) create the table as:
CREATE TABLE test0 (filler TEXT);
and COPY 100,000 rows of length 109, then recovery succeeds.
ii) create the table as:
CREATE
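A data file like the one Mark describes is easy to regenerate; a short sketch (the fill pattern and filename handling are assumptions - only the row count and width come from the test description above):

```python
def write_test_data(path, rows=100_000, width=109):
    """Write a COPY-format text file: one column, fixed-width rows."""
    with open(path, "w") as f:
        for i in range(rows):
            # Pad each row out to the target width with 'x' characters.
            f.write(f"row{i}".ljust(width, "x") + "\n")

write_test_data("test0.dat")
```

Varying `rows` around the threshold Mark found makes it easy to flip between the one-archived-log and two-archived-log cases.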
Mark Kirkwood [EMAIL PROTECTED] writes:
I have been doing some re-testing with CVS HEAD from about 1 hour ago
using the simplified example posted previously.
It is quite interesting:
The problem seems to be that the computation of checkPoint.redo at
xlog.c lines 4162-4169 (all line numbers
Mark Kirkwood [EMAIL PROTECTED] writes:
- anything >= 128393 rows in test0.dat results in 2 or more archived
logs, and recovery fails on the second log (and gives the zero length
redo at 0/1E0 message).
A zero-length record is not an error; it's the normal way of detecting
end-of-log.
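Tom's point can be illustrated with a toy record reader. This is a simplified model, not the actual XLOG page/record format: records here are just a 4-byte little-endian length followed by that many payload bytes, and a zero length header marks end-of-log rather than corruption.

```python
import struct

def read_records(buf):
    """Scan length-prefixed records until a zero-length header (end-of-log)."""
    records, pos = [], 0
    while pos + 4 <= len(buf):
        (length,) = struct.unpack_from("<I", buf, pos)
        if length == 0:          # zero-length record: normal end-of-log, not an error
            break
        records.append(buf[pos + 4 : pos + 4 + length])
        pos += 4 + length
    return records

log = (struct.pack("<I", 3) + b"abc"
       + struct.pack("<I", 2) + b"hi"
       + struct.pack("<I", 0))
print(read_records(log))  # two records, then the scan stops at the zero header
```

The "zero length redo at 0/1E0" message in Mark's test is this mechanism firing: the reader reached a zero header and reported where replay stopped.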
There are some silly bugs in the script:
- forgot to export PGDATA and PATH after changing them
- forgot to mention the need to edit test.sql (COPY line needs path to
dump file)
Apologies - I will submit a fixed version a little later
regards
Mark
Mark Kirkwood wrote:
A script to run the whole
fixed.
Mark Kirkwood wrote:
I decided to produce a nice simple example, so that anyone could
hopefully replicate what I am seeing.
The scenario is the same as before (the 11 steps), but the CREATE TABLE
and COPY step has been reduced to:
CREATE TABLE test0 (filler VARCHAR(120));
COPY test0 FROM '/data0/dump/test0.dat'