Hi,

I am currently playing around with Robert's suggestion to get rid of
changeset extraction's reuse of SnapshotData fields (namely that
xip contains committed, not uncommitted, transactions) by using a NodeTag,
similar to many other (families of) structs.
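
To make the idea concrete, here is a minimal sketch of what tagging the
snapshot variants could look like. All names here are illustrative
placeholders, not the actual identifiers I intend to use:

```c
#include <stdbool.h>

/* Hypothetical tags -- names are illustrative only */
typedef enum SnapshotTag
{
    T_MVCCSnapshot,             /* xip holds in-progress transactions */
    T_DecodingSnapshot          /* xip holds *committed* transactions */
} SnapshotTag;

typedef struct TaggedSnapshot
{
    SnapshotTag tag;            /* first field, like NodeTag elsewhere */
    /* ... xmin, xmax, xip, etc. would follow ... */
} TaggedSnapshot;

/* Callers can then check which interpretation of xip they were handed */
static inline bool
snapshot_is_decoding(const TaggedSnapshot *snap)
{
    return snap->tag == T_DecodingSnapshot;
}
```

That would let visibility routines assert they got the kind of snapshot
they expect, instead of silently misreading xip.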

While checking which references to SnapshotData's members exist, I
once more came across the following tidbit in heapgetpage():
        /*
         * If the all-visible flag indicates that all tuples on the page are
         * visible to everyone, we can skip the per-tuple visibility tests.
         *
         * Note: In hot standby, a tuple that's already visible to all
         * transactions in the master might still be invisible to a read-only
         * transaction in the standby. We partly handle this problem by tracking
         * the minimum xmin of visible tuples as the cut-off XID while marking a
         * page all-visible on master and WAL log that along with the visibility
         * map SET operation. In hot standby, we wait for (or abort) all
         * transactions that can potentially not see one or more tuples on the
         * page. That's how index-only scans work fine in hot standby. A crucial
         * difference between index-only scans and heap scans is that the
         * index-only scan completely relies on the visibility map whereas a heap
         * scan looks at the page-level PD_ALL_VISIBLE flag. We are not sure if
         * the page-level flag can be trusted in the same way, because it might
         * get propagated somehow without being explicitly WAL-logged, e.g. via a
         * full page write. Until we can prove that beyond doubt, let's check each
         * tuple for visibility the hard way.
         */
        all_visible = PageIsAllVisible(dp) && !snapshot->takenDuringRecovery;

I don't think this is necessary >= 9.2. There are only two "interesting"
places where PD_ALL_VISIBLE is set:
a) lazy_vacuum_page(), where an xl_heap_clean record is logged *before*
   PD_ALL_VISIBLE/the vm is touched, and that causes recovery
   conflicts. The heap page is locked for cleanup at that point. As the
   logging of xl_heap_clean sets the page's LSN, there's no way the page
   can appear on the standby too early.
b) empty pages in lazy_scan_heap(). If they were always empty, there's
   no need for conflicts. The only other way I can see to end up there
   is a previous heap_page_prune() that repaired fragmentation. But that
   logs a WAL record with conflict information.

So, we could just remove this?
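
Concretely, the change would reduce the heapgetpage() test to the page
flag alone. A sketch with stubbed-out types (StubPage, StubSnapshot and
StubPageIsAllVisible are stand-ins for the real Page/Snapshot machinery):

```c
#include <stdbool.h>

/* Stand-ins for the real Page / Snapshot types */
typedef struct { bool pd_all_visible; } StubPage;
typedef struct { bool takenDuringRecovery; } StubSnapshot;

static bool
StubPageIsAllVisible(const StubPage *dp)
{
    return dp->pd_all_visible;
}

/* Current check: distrusts PD_ALL_VISIBLE during recovery */
static bool
all_visible_current(const StubPage *dp, const StubSnapshot *snapshot)
{
    return StubPageIsAllVisible(dp) && !snapshot->takenDuringRecovery;
}

/* Proposed check: trust the flag on the standby too */
static bool
all_visible_proposed(const StubPage *dp, const StubSnapshot *snapshot)
{
    (void) snapshot;                    /* no longer consulted */
    return StubPageIsAllVisible(dp);
}
```

The only behavioral difference is that a standby scan of an all-visible
page would take the fast path instead of testing every tuple.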

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers