On Mon, Mar 16, 2015 at 11:46 AM, Robert Haas wrote:
> On Sun, Mar 15, 2015 at 8:04 PM, Rui DeSousa wrote:
> > Would a parameter to auto terminate long running transactions be a
> better solution? Terminating the long running transaction would allow for
> the normal vacuuming process to cleanup
Would a parameter to auto-terminate long-running transactions be a better
solution? Terminating the long-running transaction would allow the normal
vacuuming process to clean up the deleted records, thus avoiding database bloat
without introducing new semantics to Postgres's MVCC. I would also
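Taken literally, the auto-termination idea is just an age policy over open transactions. Below is a minimal sketch of the selection step, assuming session rows shaped like the (pid, xact_start) columns pg_stat_activity exposes; the function name and the fixed clock are invented for illustration, and a real implementation would follow up by passing each pid to pg_terminate_backend():

```python
from datetime import datetime, timedelta

def transactions_to_terminate(sessions, max_age, now):
    """Select sessions whose open transaction is older than max_age.

    `sessions` is a list of (pid, xact_start) pairs, the shape one
    might fetch from pg_stat_activity; returns the pids to terminate.
    """
    return [pid for pid, started in sessions
            if started is not None and now - started > max_age]

now = datetime(2015, 3, 16, 12, 0)
sessions = [
    (101, datetime(2015, 3, 16, 11, 59)),  # 1 minute old: keep
    (102, datetime(2015, 3, 16, 8, 0)),    # 4 hours old: terminate
    (103, None),                           # no open transaction
]
print(transactions_to_terminate(sessions, timedelta(hours=1), now))  # -> [102]
```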
On 17 February 2015 at 06:35, Magnus Hagander wrote:
>
> On Feb 17, 2015 12:26 AM, "Andres Freund" wrote:
>>
>> On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
>> > It seems we already have a mechanism in place that allows tuning of
>> > query cancel on standbys vs. preventing standby queries
Amit Kapila wrote:
> Could you please explain in slightly more detail why it can't work if we use
> timestamp instead of snapshot->xmin in your patch in
> function TestForOldSnapshot()?
It works fine for the additional visibility checking in scans, but
it doesn't cover the vacuuming -- that n
On Sun, Feb 22, 2015 at 10:18 PM, Kevin Grittner wrote:
>
> Amit Kapila wrote:
>
> > It seems to me that SQL Server also uses similar mechanism to
> > avoid the bloat in version store (place to store previous
> > versions or record).
>
> > I think if other leading databases provide a way to contr
On 02/22/2015 11:48 AM, Kevin Grittner wrote:
(2) Use a coarse enough granularity on time and a short enough
maximum for the GUC to just keep a circular buffer of the mappings
in memory. We might be able to make this dense enough that one
minute resolution for up to 60 days could fit in 338kB.
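The 338kB figure works out as follows: 60 days at one-minute resolution is 86,400 slots, and one 32-bit transaction ID per slot gives 86,400 × 4 = 345,600 bytes ≈ 337.5kB. A toy sketch of such a circular buffer (the class and method names are invented, not the patch's actual layout):

```python
SLOTS = 60 * 24 * 60   # one slot per minute for 60 days = 86,400 slots
XID_BYTES = 4          # one 32-bit transaction ID per slot
print(SLOTS * XID_BYTES)  # -> 345600 bytes, i.e. the ~338kB above

class XidTimeMap:
    """Circular buffer mapping a minute number to the latest xid seen
    in that minute; entries older than SLOTS minutes are overwritten."""
    def __init__(self, slots=SLOTS):
        self.slots = slots
        self.xids = [None] * slots

    def record(self, minute, xid):
        self.xids[minute % self.slots] = xid

    def xid_at(self, minute):
        return self.xids[minute % self.slots]

m = XidTimeMap()
m.record(0, 1000)
m.record(1, 1500)
assert m.xid_at(1) == 1500
```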
Amit Kapila wrote:
> It seems to me that SQL Server also uses similar mechanism to
> avoid the bloat in version store (place to store previous
> versions or record).
> I think if other leading databases provide a way to control the
> bloat, it indicates that most of the customers having
> write-
On Mon, Feb 16, 2015 at 10:49 AM, Kevin Grittner wrote:
>
> What this discussion has made me reconsider is the metric for
> considering a transaction "too old". The number of transaction IDs
> consumed seems inferior as the unit of measure for that to LSN or
> time.
>
> It looks to me to be prett
Stephen Frost wrote:
> Kevin Grittner (kgri...@ymail.com) wrote:
>> With the two patches I submitted, bloat stabilized at less than 5%
>> except for some btree indexes which followed pattern of inserting
>> at the end and deleting most (but not all) of the entries over
>> time. I've been working
On 2/19/15 1:54 PM, Kevin Grittner wrote:
> Rod Taylor wrote:
>
>> Would pg_dump be able to opt-out of such a restriction?
>
> I don't see how, since vacuum would be removing recently dead
> tuples that are still visible; the alternative to getting a
> "snapshot too old" error when reading a pag
Kevin,
* Kevin Grittner (kgri...@ymail.com) wrote:
> Stephen Frost wrote:
> > Kevin Grittner (kgri...@ymail.com) wrote:
> >> (1) They have a pool of connections each of which can have several
> >> long-running cursors, so the limit from that isn't just doubling
> >> the size of their database, i
On Feb 19, 2015 10:31 PM, "Kevin Grittner" wrote:
> > What about having the long running snapshots declare their working
> > set, and then only take them into account for global xmin for
> > relations that are in the working set? Like a SET TRANSACTION WORKING
> > SET command. This way the error i
On 02/19/2015 03:31 PM, Kevin Grittner wrote:
What about having the long running snapshots declare their working
set, and then only take them into account for global xmin for
relations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the error is deterministic,
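The working-set proposal amounts to computing a vacuum horizon per relation instead of one global xmin. A sketch of that computation under the proposal's assumptions (the data shapes and names here are invented for illustration):

```python
def relation_xmin(snapshots, relation):
    """Vacuum horizon for one relation: the oldest xmin among
    snapshots whose declared working set includes the relation.
    A working set of None means the snapshot declared nothing and
    must be counted against every relation."""
    xmins = [xmin for xmin, working_set in snapshots
             if working_set is None or relation in working_set]
    return min(xmins) if xmins else None

snapshots = [
    (100, {"big_report_table"}),  # long-running report, declared set
    (900, None),                  # ordinary short transaction
]
# The report only holds back vacuum on the table it declared:
assert relation_xmin(snapshots, "big_report_table") == 100
assert relation_xmin(snapshots, "hot_queue_table") == 900
```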
Ants Aasma wrote:
> If I understood the issue correctly, you have long running snapshots
> accessing one part of the data, while you have high churn on a
> disjoint part of data. We would need to enable vacuum on the high
> churn data while still being able to run those long queries. The
> "snaps
Rod Taylor wrote:
> Would pg_dump be able to opt-out of such a restriction?
I don't see how, since vacuum would be removing recently dead
tuples that are still visible; the alternative to getting a
"snapshot too old" error when reading a page which could be
affected is to return incorrect result
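The trade-off described here, a deterministic error rather than a silently wrong answer, can be modeled as a page-level check. A toy version with LSNs and ages as plain numbers; the patch's actual check (TestForOldSnapshot(), mentioned earlier in the thread) works on real snapshots and page LSNs, so everything below is simplified:

```python
class SnapshotTooOld(Exception):
    """Deterministic error instead of a silently wrong result."""

def check_page(snapshot_lsn, snapshot_age_min, page_lsn, threshold_min):
    """If the snapshot has passed the age threshold AND the page has
    been modified since the snapshot was taken, tuples this snapshot
    should see may already have been vacuumed away, so error out."""
    if snapshot_age_min > threshold_min and page_lsn > snapshot_lsn:
        raise SnapshotTooOld("snapshot too old")

check_page(snapshot_lsn=100, snapshot_age_min=5,  page_lsn=250, threshold_min=60)  # young snapshot: ok
check_page(snapshot_lsn=100, snapshot_age_min=90, page_lsn=80,  threshold_min=60)  # old, but page unchanged: ok
try:
    check_page(snapshot_lsn=100, snapshot_age_min=90, page_lsn=250, threshold_min=60)
except SnapshotTooOld as e:
    print(e)  # -> snapshot too old
```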
On Thu, Feb 19, 2015 at 3:44 PM, Kevin Grittner wrote:
>> I'm also interested in handling the case Stephen Frost described, where
>> a tuple is effectively dead but we don't currently have the means of
>> discovering the fact, because there is an older long running transaction
>> which is never in
Andrew Dunstan wrote:
> On 02/19/2015 09:44 AM, Kevin Grittner wrote:
> I understand why this makes people nervous. I wonder if it might be more
> palatable if there were a per-table setting that could enable it? If we
> could ensure that this was only applied to high churn queue tables, say,
> wh
On Thu, Feb 19, 2015 at 6:01 PM, Kevin Grittner wrote:
>> I can see how they would be, provided we can be confident that we're
>> going to actually throw an error when the snapshot is out of date and
>> not end up returning incorrect results. We need to be darn sure of
>> that, both now and in a
On Wed, Feb 18, 2015 at 4:57 PM, Kevin Grittner wrote:
> >> But max_standby_streaming_delay, max_standby_archive_delay and
> >> hot_standby_feedback are among the most frequent triggers for
> >> questions and complaints that I/we see.
> >>
> > Agreed.
> > And a really bad one used to be vacuum_de
On 02/19/2015 09:44 AM, Kevin Grittner wrote:
On the 15th I said this:
| What this discussion has made me reconsider is the metric for
| considering a transaction "too old". The number of transaction IDs
| consumed seems inferior as the unit of measure for that to LSN or
| time.
|
| It looks
Stephen Frost wrote:
> Kevin Grittner (kgri...@ymail.com) wrote:
>> Stephen Frost wrote:
>>> In the end, with a single long-running transaction, the worst bloat you
>>> would have is double the size of the system at the time the long-running
>>> transaction started.
>>
>> I agree that limiting b
Greg Stark wrote:
> On Sun, Feb 15, 2015 at 8:27 PM, Tom Lane wrote:
>> There might be something in that, but again it's not much like
>> this patch. The key point I think we're both making is that
>> nondeterministic failures are bad, especially when you're
>> talking about long-running, expen
On Wed, Feb 18, 2015 at 10:57 PM, Kevin Grittner wrote:
> Magnus Hagander wrote:
> > On Feb 17, 2015 12:26 AM, "Andres Freund"
> wrote:
> >> On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
> >> But max_standby_streaming_delay, max_standby_archive_delay and
> >> hot_standby_feedback are among
* Kevin Grittner (kgri...@ymail.com) wrote:
> Stephen Frost wrote:
> > I also agree with the general idea that it makes sense to provide a way
> > to control bloat, but I think you've missed what Andres was getting at
> > with his suggestion (if I understand correctly, apologies if I don't).
> >
>
On Sun, Feb 15, 2015 at 8:27 PM, Tom Lane wrote:
> There might be something in that, but again it's not much like this patch.
> The key point I think we're both making is that nondeterministic failures
> are bad, especially when you're talking about long-running, expensive-to-
> retry transaction
Stephen Frost wrote:
> I also agree with the general idea that it makes sense to provide a way
> to control bloat, but I think you've missed what Andres was getting at
> with his suggestion (if I understand correctly, apologies if I don't).
>
> The problem is that we're only looking at the overal
Kevin,
* Kevin Grittner (kgri...@ymail.com) wrote:
> Magnus Hagander wrote:
> > On Feb 17, 2015 12:26 AM, "Andres Freund" wrote:
> >> On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
>
> >>> It seems we already have a mechanism in place that allows
> >>> tuning of query cancel on standbys vs.
Magnus Hagander wrote:
> On Feb 17, 2015 12:26 AM, "Andres Freund" wrote:
>> On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
>>> It seems we already have a mechanism in place that allows
>>> tuning of query cancel on standbys vs. preventing standby
>>> queries from seeing old data, specifical
On Feb 17, 2015 12:26 AM, "Andres Freund" wrote:
>
> On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
> > It seems we already have a mechanism in place that allows tuning of
> > query cancel on standbys vs. preventing standby queries from seeing old
> > data, specifically
> > max_standby_streami
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
> It seems we already have a mechanism in place that allows tuning of
> query cancel on standbys vs. preventing standby queries from seeing old
> data, specifically
> max_standby_streaming_delay/max_standby_archive_delay. We obsessed
> about how u
On Sun, Feb 15, 2015 at 11:36:50AM -0500, Tom Lane wrote:
> I wonder if we couldn't achieve largely the same positive effects through
> adding a simple transaction-level timeout option. That would be far
> easier for users to understand and manage, it would be trivial to allow
> specific high-valu
On 2/16/15 2:41 AM, Andres Freund wrote:
Since you actually don't want cancellations for the long running
reporting queries it very much might be sensible to switch to a more
complicated version of HeapTupleSatisfiesVacuum() if there are long-running
queries. One that can recognize if rows are actua
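Andres's "more complicated HeapTupleSatisfiesVacuum()" amounts to asking, per deleted row, whether any live snapshot can actually still see it, rather than comparing against a single global xmin. A heavily simplified model, with each snapshot reduced to one xid cutoff (the real visibility rules are far more involved):

```python
def dead_to_all(row_xmin, row_xmax, snapshot_xids):
    """True if a deleted row is invisible to every active snapshot,
    so it could be vacuumed even when some snapshot's global xmin is
    older than the row. Each snapshot is reduced to one xid cutoff:
    a snapshot at cutoff s sees the row iff it was inserted at or
    before s and deleted after s."""
    if row_xmax is None:
        return False  # row not deleted at all
    return all(not (row_xmin <= s < row_xmax) for s in snapshot_xids)

# A row inserted at xid 500 and deleted at 600 is invisible both to an
# old report snapshot from xid 100 and to a recent one from 700 ...
assert dead_to_all(500, 600, [100, 700]) is True
# ... but a snapshot taken at 550 still sees it, so it must stay:
assert dead_to_all(500, 600, [100, 550, 700]) is False
```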
On 2015-02-16 05:19:11, Kevin Grittner wrote:
> Tom Lane wrote:
> > Jim Nasby writes:
> >> On 2/15/15 10:36 AM, Tom Lane wrote:
> >>> I wonder if we couldn't achieve largely the same positive effects through
> >>> adding a simple transaction-level timeout option.
>
> We suggested this to o
Tom Lane wrote:
> Jim Nasby writes:
>> On 2/15/15 10:36 AM, Tom Lane wrote:
>>> I wonder if we couldn't achieve largely the same positive effects through
>>> adding a simple transaction-level timeout option.
We suggested this to our customer and got out of the meeting with
it looking like it *mi
Jim Nasby writes:
> On 2/15/15 10:36 AM, Tom Lane wrote:
>> I wonder if we couldn't achieve largely the same positive effects through
>> adding a simple transaction-level timeout option.
> A common use-case is long-running reports hitting relatively stable data
> in a database that also has tabl
On 2/15/15 10:36 AM, Tom Lane wrote:
Kevin Grittner writes:
Tom Lane wrote:
Kevin Grittner writes:
What this patch does is add a GUC called old_snapshot_threshold. It
defaults to -1, which leaves behavior matching unpatched code.
Above that it allows tuples to be vacuumed away after the numb
Kevin Grittner writes:
> Tom Lane wrote:
>> Kevin Grittner writes:
>>> What this patch does is add a GUC called old_snapshot_threshold. It
>>> defaults to -1, which leaves behavior matching unpatched code.
>>> Above that it allows tuples to be vacuumed away after the number of
>>> transaction IDs
Tom Lane wrote:
> Kevin Grittner writes:
>> What this patch does is add a GUC called old_snapshot_threshold. It
>> defaults to -1, which leaves behavior matching unpatched code.
>> Above that it allows tuples to be vacuumed away after the number of
>> transaction IDs specified by the GUC have been
Kevin Grittner writes:
> What this patch does is add a GUC called old_snapshot_threshold. It
> defaults to -1, which leaves behavior matching unpatched code.
> Above that it allows tuples to be vacuumed away after the number of
> transaction IDs specified by the GUC have been consumed.
TBH, I'm no
This patch is related to the "Reduce pinning in btree indexes"
patch submitted here:
http://www.postgresql.org/message-id/721615179.3351449.1423959585771.javamail.ya...@mail.yahoo.com
That describes how they evolved and how they relate; I won't
duplicate that here.
Unlike the other patch, this o