Do not DROP default roles in pg_dumpall -c
When pulling the list of roles to drop, exclude roles whose names
begin with "pg_" (as we do when we are dumping the roles out to
recreate them).
Also add regression tests to cover pg_dumpall -c and this specific
issue.
Noticed by Rushabh Lathia. Patch …
Mark wal_level as PGDLLIMPORT.
Per buildfarm, this is needed to allow extensions to use XLogIsNeeded()
in Windows builds.
Branch
--
master
Details
---
http://git.postgresql.org/pg/commitdiff/f5e7b2f910b7cdb51b7369c76627998432ab6821
Modified Files
--
src/include/access/xlog.h
Fix contrib/bloom to work for unlogged indexes.
blbuildempty did not do even approximately the right thing: it tried
to add a metapage to the relation's regular data fork, which already
has one at that point. It should look like the ambuildempty methods
for all the standard index types, i.e., initialize …
Qualify table usage in dumpTable() and use regclass
All of the other tables used in the query in dumpTable(), which is
collecting column-level ACLs, are qualified, so we should be qualifying
the pg_init_privs, the related sub-select against pg_class and the
other queries added by the pg_dump catalog …
On 2016-05-24 16:09:27 -0500, Kevin Grittner wrote:
> On Tue, May 24, 2016 at 3:54 PM, Andres Freund wrote:
>
> > what about e.g. concurrent index builds? E.g. IndexBuildHeapRangeScan()
> > doesn't
> > seem to contain any checks against outdated blocks
>
> Why would it? We're talking about blocks …
On Tue, May 24, 2016 at 4:09 PM, Kevin Grittner wrote:
> On Tue, May 24, 2016 at 3:54 PM, Andres Freund wrote:
>> It appears that concurrent index builds are currently broken
>> from a quick skim?
>
> Either you don't understand this feature very well, or I don't
> understand concurrent index builds …
On Tue, May 24, 2016 at 4:10 PM, Robert Haas wrote:
> For purposes of
> "snapshot too old", though, it will be important that a function in an
> index which tries to read data from some other table which has been
> pruned cancels itself when necessary.
Hm. I'll try to work up a test case for th…
On Tue, May 24, 2016 at 3:48 PM, Kevin Grittner wrote:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
>> On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
>>> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
>>>
> Th…
On Tue, May 24, 2016 at 3:54 PM, Andres Freund wrote:
> what about e.g. concurrent index builds? E.g. IndexBuildHeapRangeScan()
> doesn't
> seem to contain any checks against outdated blocks
Why would it? We're talking about blocks where there were dead
tuples, with the transaction which updated …
On 2016-05-24 14:48:35 -0500, Kevin Grittner wrote:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
> > On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
> >> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
> >>> On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
> >>
> T…
Kevin Grittner wrote:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
> > On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
> >> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
> >>> On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
> >>
> That comment reminds me of a question …
Fetch XIDs atomically during vac_truncate_clog().
Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid
in-place, it's unsafe to assume that successive reads of those values will
give consistent results. Fetch each one just once to ensure sane behavior
in the minimum calculation.
On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
> On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
>> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
>>> On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
>>
That comment reminds me of a question I had: Did you consider the effect of this patch on analyze? …
Avoid consuming an XID during vac_truncate_clog().
vac_truncate_clog() uses its own transaction ID as the comparison point in
a sanity check that no database's datfrozenxid has already wrapped around
"into the future". That was probably fine when written, but in a lazy
vacuum we won't have assigned …
Fix range check for effective_io_concurrency
Commit 1aba62ec moved the range check of that option from guc.c into
bufmgr.c, but introduced a bug by changing a >= 0.0 to > 0.0, which made
the value 0 no longer accepted. Put it back.
Reported by Jeff Janes, diagnosed by Tom Lane
Branch
--
master
On 2016-05-24 13:04:09 -0500, Kevin Grittner wrote:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
>
> > Analyze IIRC acquires a new snapshot when getting sample rows,
>
> I could not find anything like that, and a case-insensitive search
> of analyze.c finds no occurrences of "snap".
Kevin Grittner wrote:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
>
> > Analyze IIRC acquires a new snapshot when getting sample rows,
>
> I could not find anything like that, and a case-insensitive search
> of analyze.c finds no occurrences of "snap". Can you remember
> where you …
Kevin Grittner writes:
> On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
>> Analyze IIRC acquires a new snapshot when getting sample rows,
> I could not find anything like that, and a case-insensitive search
> of analyze.c finds no occurrences of "snap". Can you remember
> where you think …
On Tue, May 24, 2016 at 12:00 PM, Andres Freund wrote:
> Analyze IIRC acquires a new snapshot when getting sample rows,
I could not find anything like that, and a case-insensitive search
of analyze.c finds no occurrences of "snap". Can you remember
where you think you saw something that would c…
Docs: mention pg_reload_conf() in ALTER SYSTEM reference page.
Takayuki Tsunakawa
Discussion: <0A3221C70F24FB45833433255569204D1F578FC3@G01JPEXMBYT05>
Branch
--
master
Details
---
http://git.postgresql.org/pg/commitdiff/c45fb43c8448c5b710d4ef9774497e1789e070e5
Modified Files
--
In examples of Oracle PL/SQL code, use varchar2 not varchar.
Oracle recommends using VARCHAR2 not VARCHAR, allegedly because they might
someday change VARCHAR to be spec-compliant about distinguishing null from
empty string. (I'm not holding my breath, though.) Our examples of PL/SQL
code were using …
On 2016-05-24 11:24:44 -0500, Kevin Grittner wrote:
> On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
> > On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
>
> >> That comment reminds me of a question I had: Did you consider the effect
> >> of this patch on analyze? It uses a snapshot, …
On Fri, May 6, 2016 at 8:28 PM, Kevin Grittner wrote:
> On Fri, May 6, 2016 at 7:48 PM, Andres Freund wrote:
>> That comment reminds me of a question I had: Did you consider the effect
>> of this patch on analyze? It uses a snapshot, and by memory you've not
>> built in a defense against analyze …
Fix typo in docs
Add missing USING BLOOM in example of contrib/bloom
Nikolay Shaplov
Branch
--
master
Details
---
http://git.postgresql.org/pg/commitdiff/6ee7fb8244560b7a3f224784b8ad2351107fa55d
Modified Files
--
doc/src/sgml/bloom.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)