Kevin Grittner wrote:
> Applied with trivial editing, mostly from a pgindent run against
> modified files.
Applied back as far as 9.0. Before that code didn't match well
enough for it to seem safe to apply without many hours of
additional testing.
I have confirmed occurrences of this problem at l
Jan Wieck wrote:
> Cleaned up all of those.
Applied with trivial editing, mostly from a pgindent run against
modified files.
-Kevin
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 12/9/2012 2:37 PM, Kevin Grittner wrote:
Jan Wieck wrote:
Based on the discussion and what I feel is a consensus I have
created an updated patch that has no GUC at all. The hard coded
parameters in include/postmaster/autovacuum.h are
AUTOVACUUM_TRUNCATE_LOCK_CHECK_INTERVAL 20 /* ms */
AUT
Jan Wieck wrote:
> Based on the discussion and what I feel is a consensus I have
> created an updated patch that has no GUC at all. The hard coded
> parameters in include/postmaster/autovacuum.h are
>
> AUTOVACUUM_TRUNCATE_LOCK_CHECK_INTERVAL 20 /* ms */
> AUTOVACUUM_TRUNCATE_LOCK_WAIT_INTERVAL
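For readers skimming the thread, the hard-coded knobs described above would sit in include/postmaster/autovacuum.h roughly as follows. Only the 20 ms check interval is visible in this excerpt; the other two values shown here are assumptions, and the second define's value is cut off in the message:

```c
/* Hard-coded truncation-lock parameters.  Only the 20 ms check
 * interval is confirmed in this thread; the other values are
 * illustrative assumptions. */
#define AUTOVACUUM_TRUNCATE_LOCK_CHECK_INTERVAL    20    /* ms */
#define AUTOVACUUM_TRUNCATE_LOCK_WAIT_INTERVAL     50    /* ms, assumed */
#define AUTOVACUUM_TRUNCATE_LOCK_TIMEOUT         5000    /* ms, assumed */
```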
On 12/6/2012 12:45 PM, Robert Haas wrote:
On Wed, Dec 5, 2012 at 10:16 PM, Jan Wieck wrote:
That sort of "dynamic" approach would indeed be interesting. But I fear that
it is going to be complex at best. The amount of time spent in scanning
heavily depends on the visibility map. The initial vac
Kevin and Robert are well aware of most of the below. I just want to put
this out here so other people who haven't followed the discussion too
closely may chime in.
Some details on the problem:
First of all, there is a minimum number of 1000 pages that the vacuum
scan must detect as possibl
On Wed, Dec 5, 2012 at 10:16 PM, Jan Wieck wrote:
> On 12/5/2012 2:00 PM, Robert Haas wrote:
>>
>> Maybe it'd be sensible to relate the retry time to the time spent
>> vacuuming the table. Say, if the amount of time spent retrying
>> exceeds 10% of the time spent vacuuming the table, with a minimu
On 12/5/2012 2:00 PM, Robert Haas wrote:
Maybe it'd be sensible to relate the retry time to the time spent
vacuuming the table. Say, if the amount of time spent retrying
exceeds 10% of the time spent vacuuming the table, with a minimum of
1s and a maximum of 1min, give up. That way, big tables w
On Wed, Dec 5, 2012 at 11:24 AM, Kevin Grittner wrote:
> Robert Haas wrote:
>> Since people *already* raise deadlock_timeout to obscenely high
>> values (a minute? an hour???) and then complain that things blow
>> up in their face, I think there's a decent argument to be made
>> that piggybacking
Robert Haas wrote:
> Since people *already* raise deadlock_timeout to obscenely high
> values (a minute? an hour???) and then complain that things blow
> up in their face, I think there's a decent argument to be made
> that piggybacking anything else on that setting is unwise.
If people are reall
On Tue, Dec 4, 2012 at 2:05 PM, Jan Wieck wrote:
>> So the question on the table is which of these three intervals
>> should be GUCs, and what values to use if they aren't.
>
> I could live with all the above defaults, but would like to see more
> comments on them.
I largely agree with what's alr
On 12/4/2012 1:51 PM, Kevin Grittner wrote:
Jan Wieck wrote:
[arguments for GUCs]
This is getting confusing. I thought I had already conceded the
case for autovacuum_truncate_lock_try, and you appeared to spend
most of your post arguing for it anyway. I think. It's a little
hard to tell. Perh
Jan Wieck wrote:
> [arguments for GUCs]
This is getting confusing. I thought I had already conceded the
case for autovacuum_truncate_lock_try, and you appeared to spend
most of your post arguing for it anyway. I think. It's a little
hard to tell. Perhaps the best thing is to present the issue to
On 12/4/2012 8:06 AM, Kevin Grittner wrote:
Jan Wieck wrote:
I believe the check interval needs to be decoupled from the
deadlock_timeout again.
OK
This will leave us with 2 GUCs at least.
Hmm. What problems do you see with hard-coding reasonable values?
The question is what is reasonabl
Jan Wieck wrote:
> Thinking about it, I'm not really happy with removing the
> autovacuum_truncate_lock_check GUC at all.
>
> Fact is that the deadlock detection code and the configuration
> parameter for it should IMHO have nothing to do with all this in
> the first place. A properly implemente
On 12/3/2012 5:42 PM, Kevin Grittner wrote:
Jan Wieck wrote:
Attached is a new patch that addresses most of the points raised
in discussion before.
1) Most of the configuration variables are derived from
deadlock_timeout now. The "check for conflicting lock request"
interval is deadlock_timeou
Jan Wieck wrote:
> Attached is a new patch that addresses most of the points raised
> in discussion before.
>
> 1) Most of the configuration variables are derived from
> deadlock_timeout now. The "check for conflicting lock request"
> interval is deadlock_timeout/10, clamped to 10ms. The "try to
Attached is a new patch that addresses most of the points raised in
discussion before.
1) Most of the configuration variables are derived from deadlock_timeout
now. The "check for conflicting lock request" interval is
deadlock_timeout/10, clamped to 10ms. The "try to acquire exclusive
lock" i
On 11/29/2012 9:46 AM, Tom Lane wrote:
Jan Wieck writes:
On 11/28/2012 3:33 PM, Kevin Grittner wrote:
Resetting starttime this way seems especially odd.
instr_time is LARGE_INTEGER on Win32 but struct timeval on Unix. Is
starttime = currenttime;
portable if those are structs?
Sure.
Jan Wieck writes:
> On 11/28/2012 3:33 PM, Kevin Grittner wrote:
>> Resetting starttime this way seems especially odd.
> instr_time is LARGE_INTEGER on Win32 but struct timeval on Unix. Is
> starttime = currenttime;
> portable if those are structs?
Sure. We rely on struct assignment in lot
On 11/28/2012 3:33 PM, Kevin Grittner wrote:
Kevin Grittner wrote:
I still need to review the timing calls, since I'm not familiar
with them so it wasn't immediately obvious to me whether they
were being used correctly. I have no reason to believe that they
aren't, but feel I should check.
It
Kevin Grittner wrote:
> I still need to review the timing calls, since I'm not familiar
> with them so it wasn't immediately obvious to me whether they
> were being used correctly. I have no reason to believe that they
> aren't, but feel I should check.
It seems odd to me that assignment of one i
Alvaro Herrera wrote:
> Are you posting an updated patch?
Well, there wasn't exactly a consensus on what should change, so I'll
throw some initial review comments out even though I still need to
check some things in the code and do more testing.
The patch applied cleanly, compiled without warnin
Jan,
Are you posting an updated patch?
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Friday, November 16, 2012 4:09 AM Alvaro Herrera wrote:
> Dimitri Fontaine wrote:
> > Jan Wieck writes:
> > > Use this lmgr feature inside count_nondeletable_pages() of
> > > vacuumlazy.c to periodically check if there is a conflicting
> > > lock request waiting. If not, keep going. If the
Dimitri Fontaine wrote:
> Jan Wieck writes:
> > Use this lmgr feature inside count_nondeletable_pages() of vacuumlazy.c to
> > periodically check if there is a conflicting lock request waiting. If not,
> > keep going. If there is a waiter, truncate the relation to the point checked
> > thus far,
Jan Wieck writes:
> Use this lmgr feature inside count_nondeletable_pages() of vacuumlazy.c to
> periodically check if there is a conflicting lock request waiting. If not,
> keep going. If there is a waiter, truncate the relation to the point checked
> thus far, release the AccessExclusiveLock, t
On Wed, Oct 24, 2012 at 4:20 PM, Jan Wieck wrote:
> This patch does introduce three new postgresql.conf parameters, which I
> would be happy to get rid of if we could derive them from something else.
> Something based on the deadlock timeout may be possible.
>
> autovacuum_truncate_lock_check
On 10/26/2012 6:35 AM, Amit Kapila wrote:
On Friday, October 26, 2012 11:50 AM Jan Wieck wrote:
On 10/26/2012 1:29 AM, Amit Kapila wrote:
> One other way could be to check after every few pages for a
> conflicting lock request.
How is this any different from what my patch does?
The diffe
On Friday, October 26, 2012 11:50 AM Jan Wieck wrote:
> On 10/26/2012 1:29 AM, Amit Kapila wrote:
> > On Thursday, October 25, 2012 9:46 PM Jan Wieck wrote:
> >> On 10/25/2012 10:12 AM, Stephen Frost wrote:
> >> > Jan,
> >> >
> >> > * Jan Wieck (janwi...@yahoo.com) wrote:
> >> >> The problem case t
On Friday, October 26, 2012 10:59 AM Amit Kapila wrote:
> On Thursday, October 25, 2012 9:46 PM Jan Wieck wrote:
> > On 10/25/2012 10:12 AM, Stephen Frost wrote:
> > > Jan,
> > >
> > > * Jan Wieck (janwi...@yahoo.com) wrote:
> > >> The problem case this patch is dealing with is rolling window
> tab
On 10/26/2012 1:29 AM, Amit Kapila wrote:
On Thursday, October 25, 2012 9:46 PM Jan Wieck wrote:
On 10/25/2012 10:12 AM, Stephen Frost wrote:
> Jan,
>
> * Jan Wieck (janwi...@yahoo.com) wrote:
>> The problem case this patch is dealing with is rolling window tables
>> that experienced some bloat.
On Thursday, October 25, 2012 9:46 PM Jan Wieck wrote:
> On 10/25/2012 10:12 AM, Stephen Frost wrote:
> > Jan,
> >
> > * Jan Wieck (janwi...@yahoo.com) wrote:
> >> The problem case this patch is dealing with is rolling window tables
> >> that experienced some bloat. The typical example is a log tab
On 10/25/2012 12:24 PM, Alvaro Herrera wrote:
Jan Wieck wrote:
In the original code, the maximum delay that autovacuum can cause by
holding the exclusive lock is one deadlock_timeout (default 1s). It
would appear reasonable to me to use max(deadlock_timeout/10,10ms)
as the interval to check for
Jan Wieck wrote:
> In the original code, the maximum delay that autovacuum can cause by
> holding the exclusive lock is one deadlock_timeout (default 1s). It
> would appear reasonable to me to use max(deadlock_timeout/10,10ms)
> as the interval to check for a conflicting lock request. For another
On 10/25/2012 10:12 AM, Stephen Frost wrote:
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
The problem case this patch is dealing with is rolling window tables
that experienced some bloat. The typical example is a log table
that has new data constantly added and the oldest data constantly
purge
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
> The problem case this patch is dealing with is rolling window tables
> that experienced some bloat. The typical example is a log table
> that has new data constantly added and the oldest data constantly
> purged out. This data normally rotates throug
On 10/25/2012 9:45 AM, Tom Lane wrote:
Jan Wieck writes:
On 10/24/2012 10:46 PM, Stephen Frost wrote:
Would it be possible to use the FSM to figure out if things have changed
since the last scan..? Does that scan update the FSM, which would then
be updated by another backend in the event that
Jan Wieck writes:
> On 10/24/2012 10:46 PM, Stephen Frost wrote:
>> Would it be possible to use the FSM to figure out if things have changed
>> since the last scan..? Does that scan update the FSM, which would then
>> be updated by another backend in the event that it decided to write
>> somethin
Steven,
On 10/24/2012 10:46 PM, Stephen Frost wrote:
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
This problem has been discussed before. Those familiar with the
subject please skip the next paragraph.
Apologies if this was already thought-of and ruled out for some reason,
but...
Because a
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
> This problem has been discussed before. Those familiar with the
> subject please skip the next paragraph.
Apologies if this was already thought-of and ruled out for some reason,
but...
> Because all the scanning had been done in parallel to normal D
Here is the patch for it.
Jan
--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index c9253a9..9f880f0 100644
*** a/src/backend/commands/vacuumlazy.c
--- b/src/
This problem has been discussed before. Those familiar with the subject
please skip the next paragraph.
When autovacuum finds a substantial amount of empty pages at the end of
a relation, it attempts to truncate it in lazy_truncate_heap(). Because
all the scanning had been done in parallel to