On Fri, Jan 6, 2012 at 12:45 PM, Robert Haas robertmh...@gmail.com wrote:
> Oh, that's brilliant. OK, I'll go try that.
All right, that test does in fact reveal the broken-ness of the
current code, and the patch I committed upthread does seem to fix it,
so I've committed that.
After further
On Fri, Jan 6, 2012 at 12:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> We could fix the direct symptom by inserting UnlockReleaseBuffer()
> in front of the continue, but AFAICS that just makes this into a
> busy-waiting equivalent of waiting unconditionally, so I don't really
> see why we should not
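For context, the loop under discussion sits in VACUUM's second heap pass. The following is a simplified reconstruction for illustration only, not the committed code; ReadBufferExtended(), ConditionalLockBufferForCleanup(), and LockBufferForCleanup() are PostgreSQL's real bufmgr functions, while the surrounding variable names (vacrelstats, onerel, vac_strategy) merely follow vacuumlazy.c's conventions:

```c
/*
 * Illustrative reconstruction of the problematic loop (simplified,
 * not the actual committed code).
 */
tupindex = 0;
while (tupindex < vacrelstats->num_dead_tuples)
{
    BlockNumber tblk =
        ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
    Buffer buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk,
                                    RBM_NORMAL, vac_strategy);

    if (!ConditionalLockBufferForCleanup(buf))
        continue;   /* BUG: the pin taken by ReadBufferExtended is
                     * leaked, and tupindex has not advanced, so the
                     * next iteration re-reads the very same block.
                     * Releasing the buffer before the continue would
                     * fix the leak but leave a busy-wait: the loop
                     * spins on this block until the conflicting pin
                     * goes away, i.e. LockBufferForCleanup() without
                     * the sleep. */

    tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats);
    /* ... unlock/release the buffer, record free space ... */
}
```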
On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> I suppose Robert had something more intelligent in mind than a tight
> loop when the buffer can't be exclusively locked, so maybe there is
> some other change that should be made here instead.
My intention was to skip the tuple,
On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas robertmh...@gmail.com wrote:
> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> I suppose Robert had something more intelligent in mind than a tight
>> loop when the buffer can't be exclusively locked, so maybe there is
>> some other change
On Fri, Jan 6, 2012 at 9:53 AM, Simon Riggs si...@2ndquadrant.com wrote:
> On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas robertmh...@gmail.com wrote:
>> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>>> I suppose Robert had something more intelligent in mind than a tight
>>> loop when
On Fri, Jan 6, 2012 at 3:28 PM, Robert Haas robertmh...@gmail.com wrote:
> On Fri, Jan 6, 2012 at 9:53 AM, Simon Riggs si...@2ndquadrant.com wrote:
>> On Fri, Jan 6, 2012 at 2:29 PM, Robert Haas robertmh...@gmail.com wrote:
>>> On Thu, Jan 5, 2012 at 7:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>>>> I
On Fri, Jan 6, 2012 at 10:51 AM, Simon Riggs si...@2ndquadrant.com wrote:
> I *have* explained what is wrong. It leaves unused tuples in the heap
> that would previously have been removed.
> More simply: lazy_vacuum_page() does some work and we can't skip that
> work just because we don't get the
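For reference, the work in question: lazy_vacuum_page() is where the second pass actually reclaims the dead line pointers collected during the scan, marking them unused and defragmenting the page. A rough sketch of its core effect, simplified and not runnable on its own; ItemIdSetUnused() and PageRepairFragmentation() are real PostgreSQL page-API calls, but the helper name is made up and the real function also handles WAL logging and page hints:

```c
/*
 * Sketch of what lazy_vacuum_page() accomplishes for one heap page
 * (hypothetical helper; simplified from the real function).
 */
static void
reclaim_dead_items(Page page, OffsetNumber *deadoffsets, int ndead)
{
    for (int i = 0; i < ndead; i++)
    {
        ItemId itemid = PageGetItemId(page, deadoffsets[i]);

        ItemIdSetUnused(itemid);    /* line pointer becomes reusable */
    }
    PageRepairFragmentation(page);  /* compact the surviving tuples */
}
```

A page skipped because its cleanup lock could not be taken therefore keeps its dead line pointers until some later VACUUM revisits it, which is the objection being raised here.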
I started to wonder how likely it would be that some other process would
sit on a buffer pin for so long as to allow 134217727 iterations of
ReadBufferExtended, even given the slowdowns associated with
CLOBBER_CACHE_ALWAYS. This led to some fruitless searching for possible
deadlock conditions,
On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> I started to wonder how likely it would be that some other process would
> sit on a buffer pin for so long as to allow 134217727 iterations of
> ReadBufferExtended, even given the slowdowns associated with
> CLOBBER_CACHE_ALWAYS.
Robert Haas robertmh...@gmail.com writes:
> On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> So at this point I've got serious doubts as to the quality of testing of
>> that whole patch, not just this part.
> I tested the case where we skip a block during the first pass, but I
On Fri, Jan 6, 2012 at 12:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> Robert Haas robertmh...@gmail.com writes:
>> On Fri, Jan 6, 2012 at 12:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>>> So at this point I've got serious doubts as to the quality of testing of
>>> that whole patch, not just this part.
I
Lately I have noticed buildfarm member jaguar occasionally failing
regression tests with
ERROR: invalid memory alloc request size 1073741824
during a VACUUM, as for example at
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jaguar&dt=2012-01-04%2023%3A05%3A02
Naturally I supposed it to be