Hi,
On Tue, 2012-09-25 at 20:42 -0400, Robert Haas wrote:
> It seems therefore that REINDEX + VACUUM with
> vacuum_freeze_table_age=0 is not quite sufficient to recover from this
> problem. If your index has come to contain a circularity, vacuum will
> fail to terminate, and you'll need to drop the index …
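The recovery sequence under discussion can be sketched as follows. This is an illustrative fragment, not an official procedure from the thread; the table name is hypothetical, and per Robert's caveat it only works if the index has not come to contain a circularity (in which case VACUUM would fail to terminate and the index would have to be dropped and recreated instead):

```sql
-- Rebuild potentially corrupted btree/GIN indexes.
REINDEX TABLE my_table;

-- Force VACUUM to scan every page of the table instead of skipping
-- pages that the (possibly corrupt) visibility map claims are
-- all-visible.
SET vacuum_freeze_table_age = 0;
VACUUM my_table;
```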
On 19 September 2012 18:47, Tom Lane wrote:
> BTW, what should our advice be for recovering from corruption due to
> this bug? As far as the btree and GIN problems go, we can tell people
> that REINDEX will fix it. And in 9.1, you don't really need to worry
> about the visibility map being bad.
BTW, what should our advice be for recovering from corruption due to
this bug? As far as the btree and GIN problems go, we can tell people
that REINDEX will fix it. And in 9.1, you don't really need to worry
about the visibility map being bad. But what do you do in 9.2, if
you have a bad visibility map? …
Andres Freund writes:
> Btw, I played with this some more on Saturday and I think, while definitely a
> bad bug, the actual consequences aren't as bad as at least I initially feared.
> Fake relcache entries are currently set in 3 scenarios during recovery:
> 1. removal of ALL_VISIBLE in heapam.c …
On 17 September 2012 07:44, Andres Freund wrote:
> So I think while that bug had the possibility of being really bad we were
> pretty lucky...
Yes, agreed. The impact is not as severe as I originally thought.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development
On Monday, September 17, 2012 07:35:06 AM Tom Lane wrote:
> Robert Haas writes:
> > On Sep 15, 2012, at 11:32 PM, Tom Lane wrote:
> >> Right, but we do a shutdown checkpoint at the end of crash recovery.
(as noted elsewhere and tackled by Simon, an END_OF_RECOVERY checkpoint didn't
sync those before)
Robert Haas writes:
> On Sep 15, 2012, at 11:32 PM, Tom Lane wrote:
>> Right, but we do a shutdown checkpoint at the end of crash recovery.
> Yes, but that only writes the buffers that are dirty. It doesn't fix the lack
> of a BM_PERMANENT flag on a buffer that ought to have had one. So that page …
On Sep 15, 2012, at 11:32 PM, Tom Lane wrote:
> Right, but we do a shutdown checkpoint at the end of crash recovery.
Yes, but that only writes the buffers that are dirty. It doesn't fix the lack
of a BM_PERMANENT flag on a buffer that ought to have had one. So that page can
now get modified AGAIN …
Robert Haas writes:
> On Sep 15, 2012, at 11:29 AM, Tom Lane wrote:
>> This is only an issue on standby slaves or when doing a PITR recovery, no?
>> As far as I can tell from the discussion, it would *not* affect crash
>> recovery, because we don't do restartpoints during crash recovery.
> No, I …
On Sep 15, 2012, at 11:29 AM, Tom Lane wrote:
> Robert Haas writes:
>> Definitions aside, I think it's a pretty scary issue. It basically means
>> that if you have a recovery (crash or archive) during which you read a
>> buffer into memory, the buffer won't be checkpointed. So if, before the …
Robert Haas writes:
> Definitions aside, I think it's a pretty scary issue. It basically means that
> if you have a recovery (crash or archive) during which you read a buffer into
> memory, the buffer won't be checkpointed. So if, before the buffer is next
> evicted, you have a crash, and if a …
On Sep 14, 2012, at 12:17 PM, Simon Riggs wrote:
> The bug itself is not major, but the extent and user impact is serious.
I don't think I understand how you're using the word "major" there. I seem to
recall some previous disputation between you and me about the use of that term,
so maybe it would …
On 14 September 2012 17:28, Tom Lane wrote:
> "Devrim GUNDUZ" writes:
>> Does this mean we need to wrap new tarballs soon, because of the data loss
>> bug?
>
> I'll defer to Robert on whether this bug is serious enough to merit a
> near-term release on its own. But historically, we've wanted to …
"Devrim GUNDUZ" writes:
> Does this mean we need to wrap new tarballs soon, because of the data loss
> bug?
I'll defer to Robert on whether this bug is serious enough to merit a
near-term release on its own. But historically, we've wanted to push
out a .1 update two or three weeks after a .0 release …
Hi,
Does this mean we need to wrap new tarballs soon, because of the data loss
bug?
Regards,
Devrim
On Friday, September 14, 2012 at 4:42 pm, Robert Haas wrote:
> Properly set relpersistence for fake relcache entries.
>
> This can result in buffers failing to be properly flushed at
> checkpoint time …