On Mon, Jul 24, 2017 at 10:50 AM, Joshua D. Drake
wrote:
> Does this suggest that we don't have a cleanup problem but a fragmentation
> problem (or both at least for the index)? Having an index that is almost
> twice the uncleaned up size isn't that uncommon.
As Tom pointed out up-thread, it's i
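For reference, leaf density and fragmentation of a B-Tree index can be checked
with the pgstattuple extension. A minimal sketch; the index name below is only
a hypothetical stand-in for the bloated index under discussion:

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT index_size, avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('bmsql_order_line_pkey');

A freshly built B-Tree typically reports an avg_leaf_density near its
fillfactor (90 by default), so values far below that point to bloat rather
than the normal steady state.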
On 07/23/2017 12:03 PM, Joshua D. Drake wrote:
As you can see, even with aggressive vacuuming, life gets increasingly
miserable over a period of 36 hours.
The largest table is:
postgres=# select
pg_size_pretty(pg_total_relation_size('bmsql_order_line'));
pg_size_pretty
1
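pg_total_relation_size() includes the heap, TOAST and all indexes. To see how
much of the growth is index bloat versus heap bloat, the same number can be
broken down, for example:

    SELECT pg_size_pretty(pg_relation_size('bmsql_order_line'))       AS heap,
           pg_size_pretty(pg_indexes_size('bmsql_order_line'))        AS indexes,
           pg_size_pretty(pg_total_relation_size('bmsql_order_line')) AS total;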
Hello,
I changed the test to run for 6 hours at a time regardless of the number of
transactions. I also changed the du command to only look at the database
(previously WAL logs were included). This is the clearest indication of
the problem I have been able to produce.
Again, this is with 128 cli
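An alternative to du that never counts WAL (pg_wal/pg_xlog lives outside the
per-database directories) is to ask the server itself, e.g.:

    SELECT pg_size_pretty(pg_database_size(current_database()));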
On 07/20/2017 08:58 PM, Joshua D. Drake wrote:
On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan writes:
Test 1: 55G  /srv/main
TPS: 955
Test 2: 112G  /srv/main
TPS: 531 (Not sure what happened here, long checkpoint?)
Test 3: 109G  /srv/main
TPS: 868
Test 4: 143G
TPS:
On 07/20/2017 11:54 PM, Sokolov Yura wrote:
On 2017-07-21 06:58, Joshua D. Drake wrote:
On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan writes:
On 2017-07-21 06:58, Joshua D. Drake wrote:
On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan writes:
My argument for the importance of index bloat to the more general
bloat problem is simple: any bloat that accumulates, that cannot be
cleaned up, will probably accumulate until it impact
On 21/07/17 15:58, Joshua D. Drake wrote:
On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan writes:
My argument for the importance of index bloat to the more general
bloat problem is simple: any bloat that accumulates, that cannot be
cleaned up, will probably accumulate until it impacts
On 07/19/2017 07:57 PM, Tom Lane wrote:
Peter Geoghegan writes:
My argument for the importance of index bloat to the more general
bloat problem is simple: any bloat that accumulates, that cannot be
cleaned up, will probably accumulate until it impacts performance
quite noticeably.
But that ju
On Thu, Jul 20, 2017 at 7:45 AM, Claudio Freire wrote:
>> For the purposes of this discussion, I'm mostly talking about
>> duplicates within a page on a unique index. If the keyspace owned by
>> an int4 unique index page only covers 20 distinct values, it will only
>> ever cover 20 distinct values
On Thu, Jul 20, 2017 at 6:28 AM, Stephen Frost wrote:
> Greetings,
>
> * Sokolov Yura (y.soko...@postgrespro.ru) wrote:
> > I wrote two days ago about vacuum ring buffer:
> > https://www.postgresql.org/message-id/8737e9bddb82501da1134f021bf4929a%40postgrespro.ru
> >
> > Increasing Vacuum's rin
On Thu, Jul 20, 2017 at 12:08 AM, Peter Geoghegan wrote:
>> The traditional
>> wisdom about btrees, for instance, is that no matter how full you pack
>> them to start with, the steady state is going to involve something like
>> 1/3rd free space. You can call that bloat if you want, but it's not
>
On 07/20/2017 06:28 AM, Stephen Frost wrote:
It's not clear off-hand how much that would improve this case, as
the database size appears to pretty quickly get beyond the OS memory
size (and only in the first test is the DB starting size less than
system memory to begin with).
FYI,
I will be p
Greetings,
* Sokolov Yura (y.soko...@postgrespro.ru) wrote:
> I wrote two days ago about vacuum ring buffer:
> https://www.postgresql.org/message-id/8737e9bddb82501da1134f021bf4929a%40postgrespro.ru
>
> Increasing Vacuum's ring buffer to size of Bulk Writer's one reduces
> autovacuum time in 3-10
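For anyone trying to reproduce such timings, a rough way to watch where a
running (auto)vacuum spends its time is the progress view added in 9.6, for
example:

    SELECT pid, relid::regclass AS target_table, phase,
           heap_blks_scanned, heap_blks_total, index_vacuum_count
    FROM pg_stat_progress_vacuum;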
On 2017-07-20 05:52, Masahiko Sawada wrote:
On Thu, Jul 20, 2017 at 8:24 AM, Stephen Frost
wrote:
* Tom Lane (t...@sss.pgh.pa.us) wrote:
"Joshua D. Drake" writes:
> At PGConf US Philly last week I was talking with Jim and Jan about
> performance. One of the items that came up is that PostgreS
On Wed, Jul 19, 2017 at 7:57 PM, Tom Lane wrote:
> Peter Geoghegan writes:
>> My argument for the importance of index bloat to the more general
>> bloat problem is simple: any bloat that accumulates, that cannot be
>> cleaned up, will probably accumulate until it impacts performance
>> quite noti
Peter Geoghegan writes:
> My argument for the importance of index bloat to the more general
> bloat problem is simple: any bloat that accumulates, that cannot be
> cleaned up, will probably accumulate until it impacts performance
> quite noticeably.
But that just begs the question: *does* it accu
On Thu, Jul 20, 2017 at 8:24 AM, Stephen Frost wrote:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> "Joshua D. Drake" writes:
>> > At PGConf US Philly last week I was talking with Jim and Jan about
>> > performance. One of the items that came up is that PostgreSQL can't run
>> > full throttle for l
On Wed, Jul 19, 2017 at 3:54 PM, Alvaro Herrera
wrote:
>> Index bloat is a general problem that B-Trees have in all other major
>> systems, but I think that PostgreSQL has a tendency to allow indexes
>> to become progressively more bloated over time, in a way that it often
>> can never recover fro
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> "Joshua D. Drake" writes:
> > At PGConf US Philly last week I was talking with Jim and Jan about
> > performance. One of the items that came up is that PostgreSQL can't run
> > full throttle for long periods of time. The long and short is that no
> > mat
On 07/19/2017 03:29 PM, Tom Lane wrote:
> "Joshua D. Drake" writes:
>> At PGConf US Philly last week I was talking with Jim and Jan about
>> performance. One of the items that came up is that PostgreSQL can't run
>> full throttle for long periods of time. The long and short is that no
>> matter
Peter Geoghegan wrote:
> Index bloat is a general problem that B-Trees have in all other major
> systems, but I think that PostgreSQL has a tendency to allow indexes
> to become progressively more bloated over time, in a way that it often
> can never recover from [1].
Interesting assertion. Many
"Joshua D. Drake" writes:
> At PGConf US Philly last week I was talking with Jim and Jan about
> performance. One of the items that came up is that PostgreSQL can't run
> full throttle for long periods of time. The long and short is that no
> matter what, autovacuum can't keep up. This is what
On Wed, Jul 19, 2017 at 3:11 PM, Joshua D. Drake wrote:
> The good news is, PostgreSQL is not doing half bad against 128 connections
> with only 16 vCPUs. The bad news is we more than doubled our disk size without
> getting reuse or bloat under control. The concern here is that under heavy
> write l
Hello,
At PGConf US Philly last week I was talking with Jim and Jan about
performance. One of the items that came up is that PostgreSQL can't run
full throttle for long periods of time. The long and short is that no
matter what, autovacuum can't keep up. This is what I have done:
Machine:
1
On Fri, Jun 23, 2017 at 3:01 AM, Thomas Munro
wrote:
> On Thu, Jun 22, 2017 at 4:29 AM, Kuntal Ghosh
> wrote:
>> On Wed, Jun 21, 2017 at 7:52 PM, Dilip Kumar wrote:
>>> On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
>>> wrote:
> IMHO, it's not a good idea to use a DSM call to verify the DSA ha
On Thu, 22 Jun 2017 13:55:26 -0400
Alvaro Herrera wrote:
> Yugo Nagata wrote:
> > Hi,
> >
> > As I reported in another thread[1], I found that the autovacuum launcher raises
> > the following error in PG 10 when it receives SIGINT. I can reproduce
> > this by pg_cancel_backend or `kill -2 `.
>
> Th
On Fri, Jun 23, 2017 at 7:02 AM, Alvaro Herrera
wrote:
> Thomas Munro wrote:
>> I thought about this when designing the DSA API. I couldn't think of
>> any good reason to provide an 'am-I-already-attached?' function
>> equivalent to dsm_find_mapping. It seemed to me that the client code
>> shoul
Thomas Munro wrote:
> I thought about this when designing the DSA API. I couldn't think of
> any good reason to provide an 'am-I-already-attached?' function
> equivalent to dsm_find_mapping. It seemed to me that the client code
> shouldn't ever be in any doubt about whether it's attached, and th
On Thu, Jun 22, 2017 at 4:29 AM, Kuntal Ghosh
wrote:
> On Wed, Jun 21, 2017 at 7:52 PM, Dilip Kumar wrote:
>> On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
>> wrote:
IMHO, it's not a good idea to use a DSM call to verify the DSA handle.
>>> Okay. Is there any particular scenario you've i
Yugo Nagata wrote:
> Hi,
>
> As I reported in another thread[1], I found that the autovacuum launcher raises
> the following error in PG 10 when it receives SIGINT. I can reproduce
> this by pg_cancel_backend or `kill -2 `.
Thanks for the report, BTW!
Thomas Munro wrote:
> Hmm. So the problem here is that AutoVacLauncherMain assumes that
> there are only two possibilities: (1) there is no handle published in
> shmem yet, so we should create a DSA area and publish the handle, and
> (2) there is a handle published in shmem so we should attach to
On Thu, Jun 22, 2017 at 6:10 PM, Michael Paquier
wrote:
> On Thu, Jun 22, 2017 at 2:44 PM, Kuntal Ghosh
> wrote:
>> On Thu, Jun 22, 2017 at 9:48 AM, Michael Paquier
>> wrote:
>>> On Thu, Jun 22, 2017 at 1:29 AM, Kuntal Ghosh
>>> wrote:
But, I've some more doubts.
1. When should we use
On Thu, Jun 22, 2017 at 2:44 PM, Kuntal Ghosh
wrote:
> On Thu, Jun 22, 2017 at 9:48 AM, Michael Paquier
> wrote:
>> On Thu, Jun 22, 2017 at 1:29 AM, Kuntal Ghosh
>> wrote:
>>> But, I've some more doubts.
>>> 1. When should we use dsm_find_mapping()? (The first few lines of
>>> dsm_attach is same
On Thu, Jun 22, 2017 at 9:48 AM, Michael Paquier
wrote:
> On Thu, Jun 22, 2017 at 1:29 AM, Kuntal Ghosh
> wrote:
>> On Wed, Jun 21, 2017 at 7:52 PM, Dilip Kumar wrote:
>>> On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
>>> wrote:
> IMHO, it's not a good idea to use a DSM call to verify the DSA
On Thu, 22 Jun 2017 13:12:48 +0900
Michael Paquier wrote:
> On Wed, Jun 21, 2017 at 9:15 PM, Yugo Nagata wrote:
> > These errors continue until this process is terminated or the server is
> > restarted.
> >
> > When SIGINT is issued, the process exits from the main loop and returns
> > to sigset
On Thu, Jun 22, 2017 at 1:29 AM, Kuntal Ghosh
wrote:
> On Wed, Jun 21, 2017 at 7:52 PM, Dilip Kumar wrote:
>> On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
>> wrote:
IMHO, it's not a good idea to use a DSM call to verify the DSA handle.
>>> Okay. Is there any particular scenario you've i
On Wed, Jun 21, 2017 at 9:15 PM, Yugo Nagata wrote:
> These errors continue until this process is terminated or the server is
> restarted.
>
> When SIGINT is issued, the process exits from the main loop and returns
> to sigsetjmp, and calls dsa_attach() before entering into the loop again,
> this
On Wed, Jun 21, 2017 at 7:52 PM, Dilip Kumar wrote:
> On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
> wrote:
>>> IMHO, it's not a good idea to use a DSM call to verify the DSA handle.
>>>
>>> Okay. Is there any particular scenario you have in mind where this may fail?
>
> It's not about failure, but a
On Wed, Jun 21, 2017 at 7:47 PM, Kuntal Ghosh
wrote:
>> IMHO, it's not a good idea to use a DSM call to verify the DSA handle.
>>
> Okay. Is there any particular scenario you have in mind where this may fail?
It's not about failure, but about the abstraction. When we are using
the DSA we should not
On Wed, Jun 21, 2017 at 7:07 PM, Dilip Kumar wrote:
> On Wed, Jun 21, 2017 at 6:50 PM, Kuntal Ghosh
> wrote:
>> I think we can just check dsm_find_mapping() to check whether the dsm
>> handle is already attached. Something like,
>>
>> }
>> - else
>> + else if(!dsm_find_mapping
On Wed, Jun 21, 2017 at 6:50 PM, Kuntal Ghosh
wrote:
> I think we can just check dsm_find_mapping() to check whether the dsm
> handle is already attached. Something like,
>
> }
> - else
> + else if(!dsm_find_mapping(AutoVacuumShmem->av_dsa_handle))
> {
>
On Wed, Jun 21, 2017 at 5:45 PM, Yugo Nagata wrote:
> Hi,
>
> As I reported in another thread[1], I found that the autovacuum launcher raises
> the following error in PG 10 when it receives SIGINT. I can reproduce
> this by pg_cancel_backend or `kill -2 `.
>
> 2017-06-21 13:56:07.010 JST [32483] ERROR
Hi,
As I reported in another thread[1], I found that the autovacuum launcher raises
the following error in PG 10 when it receives SIGINT. I can reproduce
this by pg_cancel_backend or `kill -2 `.
2017-06-21 13:56:07.010 JST [32483] ERROR: canceling statement due to user
request
2017-06-21 13:56:08.0
On Mon, Nov 28, 2016 at 12:18 PM, Tom Lane wrote:
> Robert Haas writes:
>> I don't believe we should be so scared of the possibility of a serious
>> bug that can't be found through any of the ways we normally test that
>> we aren't willing to fix problems we can readily foresee. I grant
>> that
Robert Haas writes:
> I don't believe we should be so scared of the possibility of a serious
> bug that can't be found through any of the ways we normally test that
> we aren't willing to fix problems we can readily foresee. I grant
> that there are some situations where fixing a problem might in
On Sun, Nov 27, 2016 at 10:30 PM, Tom Lane wrote:
> Robert Haas writes:
>> I think you made this considerably more fragile with those changes.
>
> This code will only ever run at all in corner cases --- cases that
> almost by definition will go untested in the standard regression tests.
> The pro
Robert Haas writes:
> I think you made this considerably more fragile with those changes.
This code will only ever run at all in corner cases --- cases that
almost by definition will go untested in the standard regression tests.
The problems you suggest it has are corner-squared or corner-cubed c
On Mon, Nov 28, 2016 at 12:14 PM, Robert Haas wrote:
> I think you made this considerably more fragile with those changes.
> Now, if we fail to drop a temporary table, we won't do any actual
> vacuuming, either. I'd be willing to bet someone will get hosed
> because of that who would have been mu
On Sun, Nov 27, 2016 at 9:33 PM, Tom Lane wrote:
> I pushed a patch to deal with this. I ended up simplifying the previous
> commit considerably by getting rid of the commit-multiple-deletions-per-
> transaction business. I do not think that this code will get exercised
> enough, either in the f
On Mon, Nov 28, 2016 at 11:46 AM, Tom Lane wrote:
> Michael Paquier writes:
>> In order to reproduce the failure I have just inserted a manual
>> pg_usleep before looping through the list of orphan_oids, and after
>> dropping manually from another session a couple of orphaned temporary
>> tables
Michael Paquier writes:
> In order to reproduce the failure I have just inserted a manual
> pg_usleep before looping through the list of orphan_oids, and after
> dropping manually from another session a couple of orphaned temporary
> tables I was able to see the failure. Attached is a proposal of
On Mon, Nov 28, 2016 at 11:33 AM, Tom Lane wrote:
> I pushed a patch to deal with this. I ended up simplifying the previous
> commit considerably by getting rid of the commit-multiple-deletions-per-
> transaction business. I do not think that this code will get exercised
> enough, either in the
On Mon, Nov 28, 2016 at 10:02 AM, Robert Haas wrote:
> On Sun, Nov 27, 2016 at 5:45 PM, Tom Lane wrote:
>> So the problem seems to be confirmed to exist, but be of low probability
>> and low consequences, in back branches. I think we only need to fix it in
>> HEAD. The lock acquisition and stat
Robert Haas writes:
> Thanks for digging into this. I failed to notice while reviewing that
> the way we were printing the message had changed a bit in the new
> code, and I just totally overlooked the existing locking hazards.
> Oops.
I pushed a patch to deal with this. I ended up simplifying
On Sun, Nov 27, 2016 at 5:45 PM, Tom Lane wrote:
> So the problem seems to be confirmed to exist, but be of low probability
> and low consequences, in back branches. I think we only need to fix it in
> HEAD. The lock acquisition and status recheck that I proposed before
> should be sufficient.
I wrote:
> Buildfarm member skink failed a couple days ago:
> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2016-11-25%2017%3A50%3A01
Ah ... I can reproduce this with moderate reliability (one failure every
10 or so iterations of the regression tests) by inserting a delay just
be
Buildfarm member skink failed a couple days ago:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2016-11-25%2017%3A50%3A01
I believe the interesting parts of the log are
2016-11-25 18:29:03.285 UTC [583882e7.2a45:1] LOG: autovacuum: dropping orphan
temp table "(null)"."(null)" i
On Thu, Oct 6, 2016 at 12:11 AM, Jeff Janes wrote:
> On Wed, Oct 5, 2016 at 7:28 AM, Masahiko Sawada
> wrote:
>>
>> Hi all,
>>
>> I found a kind of strange behaviour of the autovacuum launcher
>> process during XID anti-wraparound vacuum.
>>
>> Suppose that a database (say test_db) whose age of f
On Wed, Oct 5, 2016 at 7:28 AM, Masahiko Sawada
wrote:
> Hi all,
>
> I found a kind of strange behaviour of the autovacuum launcher
> process during XID anti-wraparound vacuum.
>
> Suppose that a database (say test_db) whose age of frozenxid is about
> to reach autovacuum_freeze_max_age has three ta
Hi all,
I found a kind of strange behaviour of the autovacuum launcher
process during XID anti-wraparound vacuum.
Suppose that a database (say test_db) whose age of frozenxid is about
to reach autovacuum_freeze_max_age has three tables T1 and T2.
T1 is very large and is frequently updated, so vacuum
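To see which tables are driving the anti-wraparound work, something along
these lines (run in the affected database) is a useful sketch:

    SELECT oid::regclass AS table_name,
           age(relfrozenxid) AS xid_age,
           pg_size_pretty(pg_table_size(oid)) AS size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 10;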
Alexander Korotkov writes:
> On Sun, May 22, 2016 at 12:39 PM, Amit Kapila
> wrote:
>> As per your latest patch, you are using ReadNewTransactionId() to get the
>> nextXid which then is used to check if any database's frozenxid is already
>> wrapped. Now, isn't the value of nextXID in your patch
On Sun, May 22, 2016 at 12:39 PM, Amit Kapila
wrote:
> On Mon, Mar 28, 2016 at 4:35 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> Hackers,
>>
>> one of our customers hit a near xid wraparound situation. The xid counter
>> reached the xidStopLimit value, so no transactions could be execu
On Mon, Mar 28, 2016 at 4:35 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> Hackers,
>
> one of our customers hit a near xid wraparound situation. The xid counter
> reached the xidStopLimit value, so no transactions could be executed in
> normal mode. But what I noticed is strange behaviour o
On Mon, Mar 28, 2016 at 2:05 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> After some debugging I found that vac_truncate_clog consumes an xid just to
> produce a warning. I wrote a simple patch which replaces
> GetCurrentTransactionId() with ShmemVariableCache->nextXid. That
> completel
Hackers,
one of our customers hit a near xid wraparound situation. The xid counter
reached the xidStopLimit value, so no transactions could be executed in
normal mode. But what I noticed is strange behaviour of autovacuum while
preventing wraparound. It vacuums tables, updates pg_class and pg_database,
but then
On Mon, Oct 27, 2014 at 5:51 PM, Alvaro Herrera
wrote:
> Jeff Janes wrote:
>> It is only a page read if you have to read the page. It would seem optimal
>> to have bgwriter adventitiously set hint bits and vm bits, because that is
>> the last point at which the page can be changed without risking
Jeff Janes wrote:
> It is only a page read if you have to read the page. It would seem optimal
> to have bgwriter adventitiously set hint bits and vm bits, because that is
> the last point at which the page can be changed without risking that it be
> written out twice. At that point, it has been
On 10/21/14, 4:36 PM, Jeff Janes wrote:
On Mon, Oct 20, 2014 at 5:46 PM, Andres Freund <and...@2ndquadrant.com> wrote:
On 2014-10-20 17:43:26 -0700, Josh Berkus wrote:
> On 10/20/2014 05:39 PM, Jim Nasby wrote:
> > Or maybe vacuum isn't the right way to handle some of these sc
On Mon, Oct 20, 2014 at 5:46 PM, Andres Freund
wrote:
> On 2014-10-20 17:43:26 -0700, Josh Berkus wrote:
> > On 10/20/2014 05:39 PM, Jim Nasby wrote:
> > > Or maybe vacuum isn't the right way to handle some of these scenarios.
> > > It's become the catch-all for all of this stuff, but maybe that
On 2014-10-20 17:43:26 -0700, Josh Berkus wrote:
> On 10/20/2014 05:39 PM, Jim Nasby wrote:
> > Or maybe vacuum isn't the right way to handle some of these scenarios.
> > It's become the catch-all for all of this stuff, but maybe that doesn't
> > make sense anymore. Certainly when it comes to deali
On 10/20/2014 05:39 PM, Jim Nasby wrote:
> Or maybe vacuum isn't the right way to handle some of these scenarios.
> It's become the catch-all for all of this stuff, but maybe that doesn't
> make sense anymore. Certainly when it comes to dealing with inserts
> there's no reason we *have* to do anyth
On 10/20/14, 3:11 PM, Andres Freund wrote:
On 2014-10-19 20:43:29 -0500, Jim Nasby wrote:
On 10/19/14, 11:41 AM, Andres Freund wrote:
On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
The "weird" part is that if it's not doing a freeze it will just punt
on a page if it can't get the cleanup lock.
On 2014-10-19 20:43:29 -0500, Jim Nasby wrote:
> On 10/19/14, 11:41 AM, Andres Freund wrote:
> >On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
> >>The "weird" part is that if it's not doing a freeze it will just punt
> >>on a page if it can't get the cleanup lock.
> >
> >I don't think that's partic
On 10/19/14, 11:41 AM, Andres Freund wrote:
On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
On 10/9/14, 4:19 PM, Andres Freund wrote:
On 2014-10-09 18:16:46 -0300, Alvaro Herrera wrote:
Andres Freund wrote:
On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
Bruce Momjian wrote:
I agree t
On 2014-10-19 12:50:30 -0400, Tom Lane wrote:
> Andres Freund writes:
> > There's a rub here though. We unconditionally do:
> > /* Do post-vacuum cleanup and statistics update for each index */
> > for (i = 0; i < nindexes; i++)
> > lazy_cleanup_index(Irel[i], indstats[i], vacrel
Andres Freund writes:
> There's a rub here though. We unconditionally do:
> /* Do post-vacuum cleanup and statistics update for each index */
> for (i = 0; i < nindexes; i++)
> lazy_cleanup_index(Irel[i], indstats[i], vacrelstats);
> and that's not particularly cheap. Mayb
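As an aside, later releases made this cost avoidable from SQL: PostgreSQL 12
added an INDEX_CLEANUP option (and a matching vacuum_index_cleanup reloption),
so a sketch like the following skips the per-index pass entirely. The table
name is only a placeholder, and skipping index cleanup leaves dead index
entries behind, so it is an emergency lever rather than a default:

    VACUUM (INDEX_CLEANUP OFF, VERBOSE) some_table;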
On 2014-10-18 21:36:48 -0500, Jim Nasby wrote:
> On 10/9/14, 4:19 PM, Andres Freund wrote:
> >On 2014-10-09 18:16:46 -0300, Alvaro Herrera wrote:
> >>>Andres Freund wrote:
> >On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
> > > >Bruce Momjian wrote:
> > > >
> >> > > >I agree t
On 10/9/14, 4:19 PM, Andres Freund wrote:
On 2014-10-09 18:16:46 -0300, Alvaro Herrera wrote:
>Andres Freund wrote:
> >On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
> > >Bruce Momjian wrote:
> > >
> > > >I agree this is a serious problem. We have discussed various options,
> > > >but h
On 10/9/14, 4:03 PM, Alvaro Herrera wrote:
Bruce Momjian wrote:
I agree this is a serious problem. We have discussed various options,
but have not decided on anything. The TODO list has:
https://wiki.postgresql.org/wiki/Todo
Improve setting of visibility map bits for read-on
On 2014-10-09 18:16:46 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
> > > Bruce Momjian wrote:
> > >
> > > > I agree this is a serious problem. We have discussed various options,
> > > > but have not decided on anything. The TODO lis
Kevin Grittner wrote:
> Wouldn't we get substantially the same thing just by counting tuple
> inserts toward the autovacuum vacuum threshold? I mean, unless
> the table is due for wraparound prevention autovacuum, it will only
> visit pages that don't have the all-visible bit set, right? And
Andres Freund wrote:
> On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
> > Bruce Momjian wrote:
> >
> > > I agree this is a serious problem. We have discussed various options,
> > > but have not decided on anything. The TODO list has:
> > >
> > > https://wiki.postgresql.org/wiki/Todo
> >
Alvaro Herrera wrote:
> Bruce Momjian wrote:
>> I agree this is a serious problem. We have discussed various options,
>> but have not decided on anything. The TODO list has:
>>
>> https://wiki.postgresql.org/wiki/Todo
>>
>> Improve setting of visibility map bits for read-only and insert
On 2014-10-09 18:03:00 -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
>
> > I agree this is a serious problem. We have discussed various options,
> > but have not decided on anything. The TODO list has:
> >
> > https://wiki.postgresql.org/wiki/Todo
> >
> > Improve setting of visib
Bruce Momjian wrote:
> I agree this is a serious problem. We have discussed various options,
> but have not decided on anything. The TODO list has:
>
> https://wiki.postgresql.org/wiki/Todo
>
> Improve setting of visibility map bits for read-only and insert-only
> workloads
>
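Since 9.6, the pg_visibility extension makes it easy to check how much of a
table actually has its all-visible (and all-frozen) bits set. A sketch, with
a placeholder table name:

    CREATE EXTENSION IF NOT EXISTS pg_visibility;
    SELECT count(*) AS pages,
           count(*) FILTER (WHERE all_visible) AS all_visible,
           count(*) FILTER (WHERE all_frozen)  AS all_frozen
    FROM pg_visibility_map('my_insert_only_table');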
On Thu, Oct 9, 2014 at 02:34:17PM +0400, Alexey Bashtanov wrote:
> Hello!
>
> The autovacuum daemon performs a vacuum when the number of rows
> updated/deleted (n_dead_tuples) reaches some threshold.
> Similarly, it performs an analyze when the number of rows changed in any
> way (incl. inserted).
> When a
Hello!
The autovacuum daemon performs a vacuum when the number of rows
updated/deleted (n_dead_tuples) reaches some threshold.
Similarly, it performs an analyze when the number of rows changed in any way
(incl. inserted).
When a table is mostly insert-only, its visibility map is not updated as
vacuum th
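This is essentially what PostgreSQL 13 eventually added as
autovacuum_vacuum_insert_threshold / autovacuum_vacuum_insert_scale_factor.
On such a release an insert-heavy table can be tuned per table, for example
(placeholder table name):

    ALTER TABLE my_insert_only_table
      SET (autovacuum_vacuum_insert_threshold = 100000,
           autovacuum_vacuum_insert_scale_factor = 0.0);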
On Wed, Oct 1, 2014 at 11:44 AM, Alvaro Herrera
wrote:
> Robert Haas wrote:
>> This kind of seems like throwing darts at the wall. It could be
>> better if we are right to skip the database already being vacuumed for
>> wraparound, or worse if we're not.
>
> Well, it only skips the DB for half th
Robert Haas wrote:
> This kind of seems like throwing darts at the wall. It could be
> better if we are right to skip the database already being vacuumed for
> wraparound, or worse if we're not.
Well, it only skips the DB for half the naptime interval, so that other
databases have a chance to be
On Tue, Sep 30, 2014 at 5:59 PM, Alvaro Herrera
wrote:
> Jeff Janes wrote:
>> > I think that instead of
>> > trying to get a single target database in that foreach loop, we could
>> > try to build a prioritized list (in-wraparound-danger first, then
>> > in-multixid-wraparound danger, then the one
Alvaro Herrera wrote:
> The attached patch implements that. I only tested it on HEAD, but
> AFAICS it applies cleanly to 9.4 and 9.3; fairly sure it won't apply to
> 9.2. Given the lack of complaints, I'm unsure about backpatching
> further back than 9.3 anyway.
FWIW my intention is to make sur
"Joshua D. Drake" writes:
> While it is obvious what is happening in $SUBJECT, as well as reasonably
> obvious why it can happen, what isn't obvious is what to do about it. It
> seems we log in as a superuser and drop the temp tables.
You don't need to do anything --- the table will go away the
Joshua D. Drake wrote:
Hi,
> While it is obvious what is happening in $SUBJECT, as well as
> reasonably obvious why it can happen, what isn't obvious is what to
> do about it. It seems we log in as a superuser and drop the temp
> tables.
>
> However, I would think if we know that it is orphaned
Hello,
While it is obvious what is happening in $SUBJECT, as well as reasonably
obvious why it can happen, what isn't obvious is what to do about it. It
seems we log in as a superuser and drop the temp tables.
However, I would think if we know that it is orphaned that autovacuum
should just
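A starting point for spotting them from SQL is a sketch like the one below;
it lists all temporary relations, and deciding which ones are truly orphaned
still means checking whether the owning backend is alive:

    SELECT n.nspname AS temp_schema, c.relname, c.relkind
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relpersistence = 't'
    ORDER BY 1, 2;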
On Mon, Jun 23, 2014 at 7:19 PM, Tom Lane wrote:
> Jeff Janes writes:
>> I didn't add this patch to the commitfest, because it was just a point
> for discussion and not actually proposed for application. But it
>> doesn't seem to have provoked much discussion either.
>
>> Should I go add this t
Jeff Janes writes:
> I didn't add this patch to the commitfest, because it was just a point
> for discussion and not actually proposed for application. But it
> doesn't seem to have provoked much discussion either.
> Should I go add this to the next commitfest?
> I do see it listed as a resolve
On Thu, May 15, 2014 at 4:06 PM, Jeff Janes wrote:
> On Thu, May 15, 2014 at 12:55 PM, Alvaro Herrera
> wrote:
>>
>> Jeff Janes wrote:
>>
>> > If you have a database with a large table in it that has just passed
>> > autovacuum_freeze_max_age, all future workers will be funnelled into
>> > that
>
Jeff Janes wrote:
> If you have a database with a large table in it that has just passed
> autovacuum_freeze_max_age, all future workers will be funnelled into that
> database until the wrap-around completes. But only one of those workers
> can actually vacuum the one table which is holding back
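The databases being targeted this way can be spotted by comparing the age of
datfrozenxid against the threshold, for example:

    SELECT datname,
           age(datfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;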
In testing 9.4 with some long running tests, I noticed that autovacuum
launcher/worker sometimes goes a bit nuts. It vacuums the same database
repeatedly without respect to the nap time.
As far as I can tell, the behavior is the same in older versions, but I
haven't tested that.
This is my under
Alvaro Herrera wrote:
> Joshua D. Drake escribió:
>> That is what is confusing me, I could be cracked but messages like these:
>>
>> automatic vacuum of table "pg_catalog.pg_attribute": could not
>> (re)acquire exclusive lock for truncate scan
>>
>> Seem to be new?
>
> Yeah, those are new.
Yeah,