On Thu, Jun 28, 2012 at 6:57 PM, Josh Berkus wrote:
>
> A second obstacle to "opportunistic wraparound vacuum" is that
> wraparound vacuum is not interruptable. If you have to kill it off and
> do something else for a couple hours, it can't pick up where it left
> off; it needs to scan the whole table
On Friday, June 29, 2012 at 04:26:42, Tom Lane wrote:
> Josh Berkus writes:
> > Well, I think it's "plausible but wrong under at least some common
> > circumstances". In addition to seeking, it ignores FS cache effects
> > (not that I have any idea how to account for these mathematically). It
>
Josh Berkus writes:
> Well, I think it's "plausible but wrong under at least some common
> circumstances". In addition to seeking, it ignores FS cache effects
> (not that I have any idea how to account for these mathematically). It
> also makes the assumption that 3 autovacuum workers running at
* Josh Berkus (j...@agliodbs.com) wrote:
> I don't find Stephen's proposal of goal-based solutions to be practical.
> A goal-based approach makes the assumption that database activity is
> predictable, and IME most databases are anything but.
We're talking about over the entire transaction space,
> I'm not especially sold on your theory that there's some behavior that
> forces such convergence, but it's certainly plausible that there was,
> say, a schema alteration applied to all of those partitions at about the
> same time. In any case, as Robert has been saying, it seems like it
> would
Josh Berkus writes:
> So there are two parts to this problem, each of which needs a different
> solution:
> 1. Databases can inadvertently get to the state where many tables need
> wraparound vacuuming at exactly the same time, especially if they have
> many "cold" data partition tables.
I'm not
On Thu, Jun 28, 2012 at 3:03 PM, Josh Berkus wrote:
> 1. Databases can inadvertently get to the state where many tables need
> wraparound vacuuming at exactly the same time, especially if they have
> many "cold" data partition tables.
This suggests that this should be handled rather earlier, and
Excerpts from Josh Berkus's message of Thu Jun 28 15:03:15 -0400 2012:
> 2) They have large partitioned tables, in which the partitions are
> time-based and do not receive UPDATES after a certain date. Each
> partition was larger than RAM.
I think the solution to this problem has nothing to do
Robert, Tom, Stephen,
So, first, a description of the specific problem I've encountered at two
sites. I'm working on another email suggesting workarounds and
solutions, but that's going to take a bit longer.
Observation
-----------
This problem occured on two database systems which shared the f
On Thu, Jun 28, 2012 at 9:51 AM, Tom Lane wrote:
> You are inventing problem details to fit
> your solution.
Well, what I'm actually doing is assuming that Josh's customers have
the same problem that our customers do.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas writes:
> On Thu, Jun 28, 2012 at 2:02 AM, Tom Lane wrote:
>> Well, that's a fair point, but I don't think it has anything to do with
>> Josh's complaint --- which AFAICT is about imposed load, not about
>> failure to vacuum things that need vacuumed.
> I think it's got everything to
> >> Parallelism is not free, ever, and particularly not here, where it has
> >> the potential to yank the disk head around between five different
> >> files, seeking like crazy, instead of a nice sequential I/O pattern on
> >> each file in turn.
> >
> > Interesting point. Maybe what's going on here
On Thu, Jun 28, 2012 at 2:02 AM, Tom Lane wrote:
> Robert Haas writes:
>> It's just ridiculous to assert that it doesn't matter if all the
>> anti-wraparound vacuums start simultaneously. It does matter. For
>> one thing, once every single autovacuum worker is pinned down doing an
>> anti-wraparound
On Wed, Jun 27, 2012 at 7:00 PM, Josh Berkus wrote:
> I've seen this at two sites now, and my conclusion is that a single
> autovacuum_max_workers isn't sufficient to cover the case of
> wraparound vacuum. Nor can we just single-thread the wraparound vacuum
> (i.e. just one worker) since that
Robert Haas writes:
> It's just ridiculous to assert that it doesn't matter if all the
> anti-wraparound vacuums start simultaneously. It does matter. For
> one thing, once every single autovacuum worker is pinned down doing an
> anti-wraparound vacuum of some table, then a table that needs an
>
On Thu, Jun 28, 2012 at 12:51 AM, Tom Lane wrote:
> Robert Haas writes:
>> For example, suppose that 26 tables each of which is 4GB in size are
>> going to simultaneously come due for an anti-wraparound vacuum in 26
>> hours. For the sake of simplicity suppose that each will take 1 hour
>> to vacuum
Robert Haas writes:
> For example, suppose that 26 tables each of which is 4GB in size are
> going to simultaneously come due for an anti-wraparound vacuum in 26
> hours. For the sake of simplicity suppose that each will take 1 hour
> to vacuum. What we currently do is wait for 26 hours and then
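Robert's arithmetic lends itself to a tiny scheduling sketch: instead of letting all 26 vacuums come due at once, start them early enough that they run back to back and the last one finishes right at the deadline. This is a hypothetical illustration (the function and its model are invented here, not code from PostgreSQL):

```python
# Sketch: spread N anti-wraparound vacuums over the time remaining
# before they all come due, instead of starting them simultaneously.
# Toy model only -- real autovacuum scheduling works differently.

def spread_schedule(n_tables, hours_until_due, hours_per_vacuum):
    """Return start times (hours from now) so the vacuums run one at a
    time, back to back, with the last finishing at the deadline."""
    total_work = n_tables * hours_per_vacuum
    first_start = hours_until_due - total_work  # negative means we're behind
    return [max(0, first_start + i * hours_per_vacuum)
            for i in range(n_tables)]

# Robert's example: 26 tables, ~1 hour each, all due in 26 hours.
starts = spread_schedule(26, 26, 1)
```

With those numbers the first vacuum starts immediately and one more starts each hour, so no two run at once and nothing sits idle waiting for the deadline before launching everything simultaneously.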
On Wed, Jun 27, 2012 at 11:38 PM, Tom Lane wrote:
> Stephen Frost writes:
>> * Josh Berkus (j...@agliodbs.com) wrote:
>>> Yeah, I can't believe I'm calling for *yet another* configuration
>>> variable either. Suggested workaround fixes very welcome.
>
>> As I suggested on IRC, my thought would be
Stephen Frost writes:
> * Josh Berkus (j...@agliodbs.com) wrote:
>> Yeah, I can't believe I'm calling for *yet another* configuration
>> variable either. Suggested workaround fixes very welcome.
> As I suggested on IRC, my thought would be to have a goal-based system
> for autovacuum which is similar
Josh Berkus writes:
>> I think what you've really got here is inappropriate autovacuum cost
>> delay settings, and/or the logic in autovacuum.c to try to divvy up the
>> available I/O capacity by tweaking workers' delay settings isn't working
>> very well. It's hard to propose improvements without
Josh, all,
* Josh Berkus (j...@agliodbs.com) wrote:
> Yeah, I can't believe I'm calling for *yet another* configuration
> variable either. Suggested workaround fixes very welcome.
As I suggested on IRC, my thought would be to have a goal-based system
for autovacuum which is similar to our goal-based
> I think what you've really got here is inappropriate autovacuum cost
> delay settings, and/or the logic in autovacuum.c to try to divvy up the
> available I/O capacity by tweaking workers' delay settings isn't working
> very well. It's hard to propose improvements without a lot more detail
> th
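The "divvying up" Tom mentions can be pictured with a toy model: autovacuum tries to hold the *total* I/O budget roughly constant, so each active worker gets a share of the configured cost limit rather than a full budget of its own. A simplified sketch (numbers assumed; the real balancing logic in autovacuum.c is more involved and also honors per-table settings):

```python
def per_worker_cost_limit(total_cost_limit, active_workers):
    """Toy model: split one global vacuum-cost budget evenly among
    active workers, so adding workers doesn't multiply total I/O."""
    if active_workers == 0:
        return 0
    return total_cost_limit // active_workers

# With a budget of 200 and 3 busy workers, each worker is throttled
# to roughly a third of the budget.
share = per_worker_cost_limit(200, 3)
```

This is why piling on more workers does not make the wraparound backlog drain faster: each worker just gets a thinner slice of the same budget.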
On Jun 27, 2012, at 22:00, Josh Berkus wrote:
> Folks,
>
> Yeah, I can't believe I'm calling for *yet another* configuration
> variable either. Suggested workaround fixes very welcome.
>
> The basic issue is that autovacuum_max_workers is set by most users
> based on autovac's fairly lightweight
Josh Berkus writes:
> Yeah, I can't believe I'm calling for *yet another* configuration
> variable either. Suggested workaround fixes very welcome.
> The basic issue is that autovacuum_max_workers is set by most users
> based on autovac's fairly lightweight action most of the time: analyze,
> vacuuming
Folks,
Yeah, I can't believe I'm calling for *yet another* configuration
variable either. Suggested workaround fixes very welcome.
The basic issue is that autovacuum_max_workers is set by most users
based on autovac's fairly lightweight action most of the time: analyze,
vacuuming pages not on th
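For orientation, the knobs this thread keeps circling are ordinary postgresql.conf GUCs. The values below are illustrative placeholders (mostly the shipped defaults of that era), not recommendations from anyone on the thread:

```
# Illustrative values only -- tune for your own workload.
autovacuum_max_workers = 3            # the setting Josh says users size for light work
autovacuum_vacuum_cost_delay = 20ms   # per-worker throttle Tom points at
autovacuum_vacuum_cost_limit = -1     # -1 falls back to vacuum_cost_limit
autovacuum_freeze_max_age = 200000000 # age that forces anti-wraparound vacuum
vacuum_freeze_table_age = 150000000   # age at which vacuum scans the whole table
```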