I wrote:
> Dmitriy Sarafannikov writes:
>> [ snapshot_non_vacuumable_v3.patch ]
> In short I think we should just set up the threshold as RecentGlobalXmin.
Pushed with that adjustment and some fooling with the comments.
regards, tom lane
--
Dmitriy Sarafannikov writes:
> [ snapshot_non_vacuumable_v3.patch ]
Starting to look at this. I think that the business with choosing
RecentGlobalXmin vs. RecentGlobalDataXmin is just wrong. What we
want to do is accept any tuple that would not be considered killable
> On 18 Aug 2017, at 08:50, Andrey Borodin wrote:
>
> The following review has been posted through the commitfest application:
> make installcheck-world: tested, failed
> Implements feature: tested, failed
> Spec compliant: tested, failed
> Documentation:
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed
Oops, missed those checkboxes. Sorry for the noise. Here's
The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: tested, failed
Spec compliant: tested, failed
Documentation: tested, failed
Hi! I've looked into the patch.
There is not so much of the
> Ok, I agree. Patch is attached.
I added a patch to the CF
--
> + else \
> + (snapshotdata).xmin = \
> + TransactionIdLimitedForOldSnapshots(RecentGlobalDataXmin, \
> + relation); \
>
> I think we don't need to use TransactionIdLimitedForOldSnapshots() as
> that is required to override xmin for table vacuum/pruning purposes.
>
>> Maybe we need
>> to use
On Mon, May 8, 2017 at 6:30 PM, Dmitriy Sarafannikov
wrote:
>
> I think we can use RecentGlobalDataXmin for non-catalog relations and
> RecentGlobalXmin for catalog relations (probably a check similar to
> what we have in heap_page_prune_opt).
>
>
> I took check from
I think we can use RecentGlobalDataXmin for non-catalog relations and RecentGlobalXmin for catalog relations (probably a check similar to what we have in heap_page_prune_opt). I took the check from heap_page_prune_opt (maybe this check should be a separate function?) But it requires to initialize
On Fri, May 5, 2017 at 1:28 PM, Dmitriy Sarafannikov
wrote:
> Amit, thanks for comments!
>
>> 1.
>> +#define InitNonVacuumableSnapshot(snapshotdata) \
>> + do { \
>> + (snapshotdata).satisfies = HeapTupleSatisfiesNonVacuumable; \
>> + (snapshotdata).xmin =
Amit, thanks for comments!
> 1.
> +#define InitNonVacuumableSnapshot(snapshotdata) \
> + do { \
> + (snapshotdata).satisfies = HeapTupleSatisfiesNonVacuumable; \
> + (snapshotdata).xmin = RecentGlobalDataXmin; \
> + } while(0)
> +
>
> Can you explain and add comments why you think
On Thu, May 4, 2017 at 9:42 PM, Dmitriy Sarafannikov
wrote:
>
>> Maybe we need another type of snapshot that would accept any
>> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
>> but a tuple that was live more recently than the xmin horizon seems
> Maybe we need another type of snapshot that would accept any
> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
> but a tuple that was live more recently than the xmin horizon seems
> like it's acceptable enough. HeapTupleSatisfiesVacuum already
> implements the right
Hi.
> On 25 Apr 2017, at 18:13, Dmitriy Sarafannikov wrote:
>
> I'd like to propose to search min and max value in index with SnapshotAny in
> get_actual_variable_range function.
> Current implementation scans index with SnapshotDirty which accepts
>
> If that is the case, then how would using SnapshotAny solve this
> problem. We get the value from index first and then check its
> visibility in heap, so if time is spent in _bt_checkkeys, why would
> using a different kind of Snapshot solve the problem?
1st scanning on the index with
> On 29 Apr 2017, at 17:34, Tom Lane wrote:
>
> Dmitriy Sarafannikov writes:
>>> Maybe we need another type of snapshot that would accept any
>>> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
>
>> If I understood
On Fri, Apr 28, 2017 at 10:02 PM, Dmitriy Sarafannikov
wrote:
>
> What I'm thinking of is the regular indexscan that's done internally
> by get_actual_variable_range, not whatever ends up getting chosen as
> the plan for the user query. I had supposed that that would
Dmitriy Sarafannikov writes:
>> Maybe we need another type of snapshot that would accept any
>> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
> If I understood correctly, this new type of snapshot would help if
> there are long running
> Maybe we need another type of snapshot that would accept any
> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
> but a tuple that was live more recently than the xmin horizon seems
> like it's acceptable enough. HeapTupleSatisfiesVacuum already
> implements the right
On Fri, Apr 28, 2017 at 3:00 PM, Tom Lane wrote:
> You are confusing number of tuples in the index, which we estimate from
> independent measurements such as the file size, with endpoint value,
> which is used for purposes like guessing whether a mergejoin will be
> able to
Robert Haas writes:
> On Fri, Apr 28, 2017 at 12:12 PM, Tom Lane wrote:
>> Maybe we need another type of snapshot that would accept any
>> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
> I don't, in general, share your
On Fri, Apr 28, 2017 at 12:12 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Apr 27, 2017 at 5:22 PM, Tom Lane wrote:
>>> How so? Shouldn't the indexscan go back and mark such tuples dead in
>>> the index, such that they'd be
> What I'm thinking of is the regular indexscan that's done internally
> by get_actual_variable_range, not whatever ends up getting chosen as
> the plan for the user query. I had supposed that that would kill
> dead index entries as it went, but maybe that's not happening for
> some reason.
Robert Haas writes:
> On Thu, Apr 27, 2017 at 5:22 PM, Tom Lane wrote:
>> How so? Shouldn't the indexscan go back and mark such tuples dead in
>> the index, such that they'd be visited this way only once? If that's
>> not happening, maybe we should
On Thu, Apr 27, 2017 at 5:22 PM, Tom Lane wrote:
>>> But if we delete many rows from beginning or end of index, it would be
>>> very expensive too because we will fetch each dead row and reject it.
>
>> Yep, and I've seen that turn into a serious problem in production.
>
> How
Andres Freund writes:
> On 2017-04-27 17:22:25 -0400, Tom Lane wrote:
>> How so? Shouldn't the indexscan go back and mark such tuples dead in
>> the index, such that they'd be visited this way only once? If that's
>> not happening, maybe we should try to fix it.
> One way
On 2017-04-27 17:22:25 -0400, Tom Lane wrote:
> > Yep, and I've seen that turn into a serious problem in production.
>
> How so? Shouldn't the indexscan go back and mark such tuples dead in
> the index, such that they'd be visited this way only once? If that's
> not happening, maybe we should
Robert Haas writes:
> On Thu, Apr 27, 2017 at 4:08 AM, Dmitriy Sarafannikov
> wrote:
>> I'd like to propose to search min and max value in index with SnapshotAny in
>> get_actual_variable_range function.
> +1 from me, but Tom rejected that
On Thu, Apr 27, 2017 at 4:08 AM, Dmitriy Sarafannikov
wrote:
> I'd like to propose to search min and max value in index with SnapshotAny in
> get_actual_variable_range function.
+1 from me, but Tom rejected that approach last time.
> But if we delete many rows from
Hi hackers,
I'd like to propose to search min and max value in index with SnapshotAny in the get_actual_variable_range function. The current implementation scans the index with SnapshotDirty, which accepts uncommitted rows and rejects dead rows. In the code there is a comment about this: /* * In