On Mon, Mar 16, 2015 at 11:46 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Mar 15, 2015 at 8:04 PM, Rui DeSousa r...@crazybean.net wrote:
Would a parameter to auto-terminate long-running transactions be a
better solution? Terminating the long-running transaction would allow for
the
On Sun, Mar 15, 2015 at 8:04 PM, Rui DeSousa r...@crazybean.net wrote:
Would a parameter to auto-terminate long-running transactions be a better
solution? Terminating the long-running transaction would allow for the normal
vacuuming process to clean up the deleted records, thus avoiding
Would a parameter to auto-terminate long-running transactions be a better
solution? Terminating the long-running transaction would allow for the normal
vacuuming process to clean up the deleted records, thus avoiding database bloat
without introducing new semantics to Postgres's MVCC. I would
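Something close to Rui's suggestion can already be scripted with existing catalog facilities; a minimal sketch, with the one-hour cutoff being an assumed policy chosen purely for illustration:

```sql
-- Terminate backends whose transaction has been open longer than an hour.
-- pg_terminate_backend() and pg_stat_activity are existing facilities;
-- the one-hour limit is an invented example, not from the thread.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND xact_start < now() - interval '1 hour'
  AND pid <> pg_backend_pid();
```

A server-enforced parameter, as proposed, would do this automatically rather than relying on a cron-driven session.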
On 16/03/15 13:04, Rui DeSousa wrote:
Would a parameter to auto-terminate long-running transactions be a better
solution? Terminating the long-running transaction would allow for the normal
vacuuming process to clean up the deleted records, thus avoiding database bloat
without introducing new
On 17 February 2015 at 06:35, Magnus Hagander mag...@hagander.net wrote:
On Feb 17, 2015 12:26 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
It seems we already have a mechanism in place that allows tuning of
query cancel on standbys vs.
Amit Kapila amit.kapil...@gmail.com wrote:
Could you please explain in slightly more detail why can't it work if we use
timestamp instead of snapshot-xmin in your patch in
function TestForOldSnapshot()?
It works fine for the additional visibility checking in scans, but
it doesn't cover the
On Sun, Feb 22, 2015 at 10:18 PM, Kevin Grittner kgri...@ymail.com wrote:
Amit Kapila amit.kapil...@gmail.com wrote:
It seems to me that SQL Server also uses a similar mechanism to
avoid bloat in the version store (the place to store previous
versions of records).
I think if other leading
On Mon, Feb 16, 2015 at 10:49 AM, Kevin Grittner kgri...@ymail.com wrote:
What this discussion has made me reconsider is the metric for
considering a transaction too old. The number of transaction IDs
consumed seems inferior as the unit of measure for that to LSN or
time.
It looks to me to
Amit Kapila amit.kapil...@gmail.com wrote:
It seems to me that SQL Server also uses a similar mechanism to
avoid bloat in the version store (the place to store previous
versions of records).
I think if other leading databases provide a way to control the
bloat, it indicates that most of the
On 02/22/2015 11:48 AM, Kevin Grittner wrote:
(2) Use a coarse enough granularity on time and a short enough
maximum for the GUC to just keep a circular buffer of the mappings
in memory. We might be able to make this dense enough that
one-minute resolution for up to 60 days could fit in
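The sizing claim is easy to check; the per-entry size below is an assumption for illustration, not a figure from the thread:

```
60 days × 24 h × 60 min        = 86,400 one-minute slots
86,400 slots × 8 bytes/entry   ≈ 675 kB  (assuming a 4-byte xid plus alignment)
```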
Stephen Frost sfr...@snowman.net wrote:
Kevin Grittner (kgri...@ymail.com) wrote:
With the two patches I submitted, bloat stabilized at less than 5%
except for some btree indexes which followed a pattern of inserting
at the end and deleting most (but not all) of the entries over
time. I've
Ants Aasma a...@cybertec.at wrote:
If I understood the issue correctly, you have long running snapshots
accessing one part of the data, while you have high churn on a
disjoint part of data. We would need to enable vacuum on the high
churn data while still being able to run those long queries.
On 02/19/2015 03:31 PM, Kevin Grittner wrote:
What about having the long running snapshots declare their working
set, and then only take them into account for global xmin for
relations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the error is deterministic,
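The proposed command is hypothetical syntax that was never implemented; as a sketch (the table names are invented for illustration):

```sql
BEGIN;
-- Hypothetical: declare which relations this snapshot will read,
-- so vacuum on all other relations can ignore this transaction's xmin.
SET TRANSACTION WORKING SET (reporting.sales_history, reporting.dim_date);
SELECT count(*) FROM reporting.sales_history;
COMMIT;
```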
On Wed, Feb 18, 2015 at 4:57 PM, Kevin Grittner kgri...@ymail.com wrote:
But max_standby_streaming_delay, max_standby_archive_delay and
hot_standby_feedback are among the most frequent triggers for
questions and complaints that I/we see.
Agreed.
And a really bad one used to be
Andrew Dunstan and...@dunslane.net wrote:
On 02/19/2015 09:44 AM, Kevin Grittner wrote:
I understand why this makes people nervous. I wonder if it might be more
palatable if there were a per-table setting that could enable it? If we
could ensure that this was only applied to high churn queue
Rod Taylor rod.tay...@gmail.com wrote:
Would pg_dump be able to opt-out of such a restriction?
I don't see how, since vacuum would be removing recently dead
tuples that are still visible; the alternative to getting a
"snapshot too old" error when reading a page which could be
affected is to
On Thu, Feb 19, 2015 at 6:01 PM, Kevin Grittner kgri...@ymail.com wrote:
I can see how they would be, provided we can be confident that we're
going to actually throw an error when the snapshot is out of date and
not end up returning incorrect results. We need to be darn sure of
that, both now
On Thu, Feb 19, 2015 at 3:44 PM, Kevin Grittner kgri...@ymail.com wrote:
I'm also interested in handling the case Stephen Frost described, where
a tuple is effectively dead but we don't currently have the means of
discovering the fact, because there is an older long running transaction
which
On Feb 19, 2015 10:31 PM, Kevin Grittner kgri...@ymail.com wrote:
What about having the long running snapshots declare their working
set, and then only take them into account for global xmin for
relations that are in the working set? Like a SET TRANSACTION WORKING
SET command. This way the
Stephen Frost sfr...@snowman.net wrote:
Kevin Grittner (kgri...@ymail.com) wrote:
Stephen Frost sfr...@snowman.net wrote:
In the end, with a single long-running transaction, the worst bloat you
would have is double the size of the system at the time the long-running
transaction started.
I
On 02/19/2015 09:44 AM, Kevin Grittner wrote:
On the 15th I said this:
| What this discussion has made me reconsider is the metric for
| considering a transaction too old. The number of transaction IDs
| consumed seems inferior as the unit of measure for that to LSN or
| time.
|
| It looks
Greg Stark st...@mit.edu wrote:
On Sun, Feb 15, 2015 at 8:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There might be something in that, but again it's not much like
this patch. The key point I think we're both making is that
nondeterministic failures are bad, especially when you're
talking
Kevin,
* Kevin Grittner (kgri...@ymail.com) wrote:
Stephen Frost sfr...@snowman.net wrote:
Kevin Grittner (kgri...@ymail.com) wrote:
(1) They have a pool of connections each of which can have several
long-running cursors, so the limit from that isn't just doubling
the size of their
On 2/19/15 1:54 PM, Kevin Grittner wrote:
Rod Taylor rod.tay...@gmail.com wrote:
Would pg_dump be able to opt-out of such a restriction?
I don't see how, since vacuum would be removing recently dead
tuples that are still visible; the alternative to getting a
"snapshot too old" error when
Magnus Hagander mag...@hagander.net wrote:
On Feb 17, 2015 12:26 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
It seems we already have a mechanism in place that allows
tuning of query cancel on standbys vs. preventing standby
queries from
Kevin,
* Kevin Grittner (kgri...@ymail.com) wrote:
Magnus Hagander mag...@hagander.net wrote:
On Feb 17, 2015 12:26 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
It seems we already have a mechanism in place that allows
tuning of
On Wed, Feb 18, 2015 at 10:57 PM, Kevin Grittner kgri...@ymail.com wrote:
Magnus Hagander mag...@hagander.net wrote:
On Feb 17, 2015 12:26 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
But max_standby_streaming_delay,
Stephen Frost sfr...@snowman.net wrote:
I also agree with the general idea that it makes sense to provide a way
to control bloat, but I think you've missed what Andres was getting at
with his suggestion (if I understand correctly, apologies if I don't).
The problem is that we're only looking
On Sun, Feb 15, 2015 at 8:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There might be something in that, but again it's not much like this patch.
The key point I think we're both making is that nondeterministic failures
are bad, especially when you're talking about long-running, expensive-to-
* Kevin Grittner (kgri...@ymail.com) wrote:
Stephen Frost sfr...@snowman.net wrote:
I also agree with the general idea that it makes sense to provide a way
to control bloat, but I think you've missed what Andres was getting at
with his suggestion (if I understand correctly, apologies if I
On Sun, Feb 15, 2015 at 11:36:50AM -0500, Tom Lane wrote:
I wonder if we couldn't achieve largely the same positive effects through
adding a simple transaction-level timeout option. That would be far
easier for users to understand and manage, and it would be trivial to allow
specific high-value
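For comparison, a sketch of the two levels of timeout; statement_timeout exists today, while the transaction-level GUC name here is hypothetical:

```sql
SET statement_timeout = '10min';   -- existing: limits a single statement
SET transaction_timeout = '1h';    -- hypothetical transaction-level analogue
```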
On 2/16/15 2:41 AM, Andres Freund wrote:
Since you actually don't want cancellations for the long running
reporting queries it very much might be sensible to switch to a more
complicated version of HeapTupleSatisfiesVacuum() if there's longrunning
queries. One that can recognize if rows are
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
It seems we already have a mechanism in place that allows tuning of
query cancel on standbys vs. preventing standby queries from seeing old
data, specifically
max_standby_streaming_delay/max_standby_archive_delay. We obsessed
about how users
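For reference, the settings Bruce mentions, shown with their shipped defaults (the comments summarize the trade-off and are not from the thread):

```
# postgresql.conf on a hot standby
max_standby_streaming_delay = 30s  # delay WAL apply up to 30s before cancelling conflicting queries
max_standby_archive_delay = 30s    # same limit when replaying WAL from the archive
hot_standby_feedback = off         # on: report the standby's xmin upstream,
                                   # trading primary-side bloat for fewer cancels
```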
On Feb 17, 2015 12:26 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2015-02-16 16:35:46 -0500, Bruce Momjian wrote:
It seems we already have a mechanism in place that allows tuning of
query cancel on standbys vs. preventing standby queries from seeing old
data, specifically
On 2015-02-16 05:19:11 +, Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Jim Nasby jim.na...@bluetreble.com writes:
On 2/15/15 10:36 AM, Tom Lane wrote:
I wonder if we couldn't achieve largely the same positive effects through
adding a simple transaction-level timeout
Kevin Grittner kgri...@ymail.com writes:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
What this patch does is add a GUC called old_snapshot_threshold. It
defaults to -1, which leaves behavior matching unpatched code.
Above that it allows tuples to be vacuumed
Jim Nasby jim.na...@bluetreble.com writes:
On 2/15/15 10:36 AM, Tom Lane wrote:
I wonder if we couldn't achieve largely the same positive effects through
adding a simple transaction-level timeout option.
A common use-case is long-running reports hitting relatively stable data
in a database
On 2/15/15 10:36 AM, Tom Lane wrote:
Kevin Grittner kgri...@ymail.com writes:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
What this patch does is add a GUC called old_snapshot_threshold. It
defaults to -1, which leaves behavior matching unpatched code.
Above
Tom Lane t...@sss.pgh.pa.us wrote:
Jim Nasby jim.na...@bluetreble.com writes:
On 2/15/15 10:36 AM, Tom Lane wrote:
I wonder if we couldn't achieve largely the same positive effects through
adding a simple transaction-level timeout option.
We suggested this to our customer and got out of the
This patch is related to the "Reduce pinning in btree indexes"
patch submitted here:
http://www.postgresql.org/message-id/721615179.3351449.1423959585771.javamail.ya...@mail.yahoo.com
That describes how they evolved and how they relate; I won't
duplicate that here.
Unlike the other patch, this
Kevin Grittner kgri...@ymail.com writes:
What this patch does is add a GUC called old_snapshot_threshold. It
defaults to -1, which leaves behavior matching unpatched code.
Above that it allows tuples to be vacuumed away after the number of
transaction IDs specified by the GUC have been
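In postgresql.conf terms the patch's setting would look like the following; the default comes from the thread, while the non-default value is an invented illustration (the patch measured the threshold in consumed transaction IDs):

```
old_snapshot_threshold = -1        # default: disabled, matches unpatched behavior
#old_snapshot_threshold = 1000000  # example: tuples vacuumable after ~1M xids
```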
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
What this patch does is add a GUC called old_snapshot_threshold. It
defaults to -1, which leaves behavior matching unpatched code.
Above that it allows tuples to be vacuumed away after the number of
transaction IDs