Re: pg_stat_statements: more test coverage

2023-12-30 Thread Michael Paquier
On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:
> Ok, I have committed these two patches.

Please note that the buildfarm has turned red, as in:
https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=pipit&dt=2023-12-31%2001%3A12%3A22&stg=misc-check

pg_stat_statements's regression.diffs holds more details:
SELECT query FROM pg_stat_statements WHERE query LIKE '%t001%' OR query LIKE 
'%t098%' ORDER BY query;
 query
--------------------
- select * from t001
  select * from t098
-(2 rows)
+(1 row)  
--
Michael


signature.asc
Description: PGP signature


Re: Build versionless .so for Android

2023-12-30 Thread Michael Paquier
On Sun, Dec 31, 2023 at 06:08:04AM +0100, Matthias Kuhn wrote:
> In order to ship libpq on Android, it needs to be built without a version
> suffix on the .so file.
> The attached patch will make sure that no version is added when built for
> Android.

FWIW, I have mixed feelings about patching the code to treat
android-linux as an exception while changing nothing in the
documentation to explain why this is required.  A custom patch may
serve you better here, and note that you did not even touch the meson
paths.  See libpq_c_args in src/interfaces/libpq/meson.build as one
example.  That's just my opinion, of course, still there are no
buildfarm members that would cover your patch.

> I was wondering if there are a) any comments on the approach and if it
> should be handed in for a commitfest (currently waiting for the cooldown
> period after account activation, I am not sure how long that is)

Such discussions are well-suited to a commitfest entry.  No idea if there
is a cooldown period after account creation before being able to touch
the CF app contents, though.
--
Michael




Re: Build versionless .so for Android

2023-12-30 Thread Tom Lane
Matthias Kuhn  writes:
> In order to ship libpq on Android, it needs to be built without a version
> suffix on the .so file.
> The attached patch will make sure that no version is added when built for
> Android.

This ... seems incredibly brain-dead.  Does Android really not cope
with version-to-version ABI changes?

regards, tom lane




Simplify documentation related to Windows builds

2023-12-30 Thread Michael Paquier
Hi all,

As promised around thread [1] that has moved the docs related to
Windows into a new sub-section for Visual, here is a follow-up to
improve portions of its documentation, for discussion in the next CF.

Some of my notes:
- How much does command line editing work on Windows?  When it came to
VS, I always got the impression that this never worked.  Andres in [2]
mentioned otherwise because meson makes that easier?
- ActiveState perl should not be recommended IMO, as being able to use
a perl binary requires one to *register* into their service for tokens
that can be used to kick perl sessions, last time I checked.  Two
alternatives:
-- MinGW perl binary.
-- strawberry perl (?) and Chocolatey.
-- MSYS
- http://www.mingw.org/ is a dead end.  This could be replaced by
links to https://www.mingw-w64.org/ instead?

At the end, the main issue that I have with this part of the
documentation is the lack of consistency, leading to a confusing user
experience when building on Windows.  My recent impression is that
Andrew has picked up Chocolatey in some (all?) of his buildfarm
animals with Strawberry Perl.  I've had a good experience with it,
FWIW, but Andres also mentioned to me a couple of weeks ago while in
Prague that Strawberry could lead to unpredictable results (sorry, I
don't remember all the exact details).

Between MSYS2, mingw-w64 and Chocolatey, there are a lot of options
available to users.  So shouldn't we try to recommend only one of them,
then align the buildfarm and the CI to use one of them?  Supporting
more than one, at most two, would be OK for me, my end goal would be
to get rid entirely of the list of build dependencies in this "Visual"
section, because that's just a duplicate of what meson lists, except
that meson should do a better job at detecting dependencies than what
the now-dead MSVC scripts did.  If we support two, the CI and the
buildfarm should run them.

I am attaching a patch that's an embryo of work (little time for
hacking due to life these days, still I wanted to get the discussion
started), but let's discuss which direction we should take moving
forward for 17~.

Thanks,

[1]: https://www.postgresql.org/message-id/zqzp_vmjcerm1...@paquier.xyz
[2]: 
https://www.postgresql.org/message-id/20231116010703.gtjyepw7jsvyg...@awork3.anarazel.de
--
Michael
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index bb55695300..40a030f62b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -3727,7 +3727,7 @@ xcrun --show-sdk-path
 MinGW, the Unix-like build tools, and MSYS, a collection
 of Unix tools required to run shell scripts
 like configure, can be downloaded
-from http://www.mingw.org/;>.  Neither is
+from https://www.mingw-w64.org/;>.  Neither is
 required to run the resulting binaries; they are needed only for
 creating the binaries.

@@ -3892,13 +3892,6 @@ make: *** [postgres] Error 1
 10 or later.

 
-   
-Native builds of psql don't support command
-line editing. The Cygwin build does support
-command line editing, so it should be used where psql is needed for
-interactive use on Windows.
-   
-

 PostgreSQL can be built using the Visual C++ compiler suite from Microsoft.
 These compilers can be either from Visual Studio,
@@ -3957,14 +3950,14 @@ make: *** [postgres] Error 1
 
  
   
-   ActiveState Perl
+   Perl

-ActiveState Perl is required to run the build generation scripts. MinGW
-or Cygwin Perl will not work. It must also be present in the PATH.
-Binaries can be downloaded from
-https://www.activestate.com;>
-(Note: version 5.14 or later is required,
-the free Standard Distribution is sufficient).
+ Perl is required to run the build generation scripts.
+ Perl from  https://packages.msys2.org/base/perl;> or
+ https://community.chocolatey.org/packages/StrawberryPerl;>
+  Strawberry Perl from the package manager
+ https://community.chocolatey.org/;>Chocolatey
+ are good alternatives. Perl must be present in the PATH.

   
 
@@ -3981,8 +3974,7 @@ make: *** [postgres] Error 1

 Both Bison and Flex
 are included in the msys tool suite, available
-from http://www.mingw.org/wiki/MSYS;> as part of the
-MinGW compiler suite.
+from https://www.msys2.org/;>.

 

@@ -4016,10 +4008,11 @@ make: *** [postgres] Error 1
 
  
   
-   ActiveState Tcl
+   Tcl

 Required for building PL/Tcl (Note: version
-8.4 is required, the free Standard Distribution is sufficient).
+8.4 is required, the free Standard Distribution is sufficient). One
+option is from https://packages.msys2.org/base/tcl;>.

   
 




Re: Why do parallel scans require syncscans (but not really)?

2023-12-30 Thread Zhang Mingli
Hi Tomas,

Zhang Mingli
www.hashdata.xyz

On Dec 31, 2023 at 07:10 +0800, Tomas Vondra wrote:
> Sadly, there's no explanation why parallel scans do not allow disabling
> sync scans just like serial scans - and it's not quite obvious to me.

I feel confused too.

```
Assert(!IsBootstrapProcessingMode());
    Assert(allow_sync);
    snapshot = scan->rs_snapshot;
```

I dug into this for a while and found that there is a close relationship between
the phs_syncscan field (for parallel scans) and the allow_sync flag.

In table_block_parallelscan_initialize() ParallelTableScanDescData.phs_syncscan 
is set.

/* compare phs_syncscan initialization to similar logic in initscan */
bpscan->base.phs_syncscan = synchronize_seqscans &&
!RelationUsesLocalBuffers(rel) &&
bpscan->phs_nblocks > NBuffers / 4;

And allow_sync is always set to true in initscan() when phs_syncscan is
not NULL.

if (scan->rs_base.rs_parallel != NULL)
{
/* For parallel scan, believe whatever ParallelTableScanDesc 
says. */
if (scan->rs_base.rs_parallel->phs_syncscan)
scan->rs_base.rs_flags |= SO_ALLOW_SYNC;
else
scan->rs_base.rs_flags &= ~SO_ALLOW_SYNC;
}

And phs_syncscan is used in
table_block_parallelscan_startblock_init() and table_block_parallelscan_nextpage()
to do the synchronized-scan bookkeeping.

Back to the assertion: the else branch means the scan parameter (the parallel
scan descriptor) is not NULL and we are in parallel scan mode.
else
{
/*
 * Parallel index build.
 *
 * Parallel case never registers/unregisters own snapshot.  
Snapshot
 * is taken from parallel heap scan, and is SnapshotAny or an 
MVCC
 * snapshot, based on same criteria as serial case.
 */
Assert(!IsBootstrapProcessingMode());
Assert(allow_sync);
snapshot = scan->rs_snapshot;
}

I agree with your question: why should all parallel plans be synchronized?
A parallel scan should have a choice about syncscan.
Besides that, I think Assert(allow_sync) is risky; going by the above, it
should at least consult the phs_syncscan field to judge whether allow_sync
is true.  So I guess Assert(allow_sync) should, in theory, fail when we use
a parallel scan whose phs_syncscan is false.

/* compare phs_syncscan initialization to similar logic in initscan */
bpscan->base.phs_syncscan = synchronize_seqscans &&
!RelationUsesLocalBuffers(rel) &&
bpscan->phs_nblocks > NBuffers / 4;

However, I couldn't reproduce it, because phs_syncscan is set according to
data size, even with some parallel cost GUCs set to 0.
And if there is not enough data, we usually do not choose a parallel plan,
as in this case: building an index with a parallel scan on the underlying table.
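For what it's worth, the size gate can be illustrated with a tiny sketch (Python used purely as pseudocode; the function name and parameters are invented, though they mirror the C fields quoted above):

```python
def phs_syncscan_enabled(synchronize_seqscans, uses_local_buffers,
                         phs_nblocks, nbuffers):
    # Mirrors the initialization in table_block_parallelscan_initialize():
    # syncscan is only considered when the table spans more than a quarter
    # of shared_buffers.
    return (synchronize_seqscans
            and not uses_local_buffers
            and phs_nblocks > nbuffers // 4)

# With the default shared_buffers of 128MB (16384 pages of 8kB), a table
# needs more than 4096 pages (~32MB) before phs_syncscan is set -- which is
# why a small test table never reaches the suspected assertion failure.
print(phs_syncscan_enabled(True, False, 100, 16384))    # False (small table)
print(phs_syncscan_enabled(True, False, 5000, 16384))   # True  (large table)
```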





Re: pg_class.reltuples of brin indexes

2023-12-30 Thread Michael Paquier
On Sun, Dec 31, 2023 at 02:08:41AM +0100, Tomas Vondra wrote:
> I'm not sure. I think the current behavior is (still) wrong - I just
> rediscovered it during testing BRIN. I haven't checked, but I guess GIN
> is still affected too.
> 
> What's not clear to me is if this is merely cosmetic issue (making
> pg_class data confusing for people), or if it has some practical impact.
> And I'm not sure there's a good way to improve this, except for some
> basic guesswork. For BRIN I can imagine simply calculating the number of
> page ranges (relpages / pages_per_range), but no idea about GIN.

FWIW, this area of the code rings a few bells:
https://www.postgresql.org/message-id/17787-b2dbe62bdfabd...@postgresql.org
https://www.postgresql.org/message-id/17205-42b1d8f131f0c...@postgresql.org
--
Michael




Build versionless .so for Android

2023-12-30 Thread Matthias Kuhn
Hi,

In order to ship libpq on Android, it needs to be built without a version
suffix on the .so file.
The attached patch will make sure that no version is added when built for
Android.
I was wondering if there are a) any comments on the approach and if it
should be handed in for a commitfest (currently waiting for the cooldown
period after account activation, I am not sure how long that is)

Thank you for any feedback
Matthias
diff --git a/src/Makefile.shlib b/src/Makefile.shlib
index 92d893e..4801b7a 100644
--- a/src/Makefile.shlib
+++ b/src/Makefile.shlib
@@ -187,7 +187,13 @@ endif
 ifeq ($(PORTNAME), linux)
   LINK.shared		= $(COMPILER) -shared
   ifdef soname
-LINK.shared		+= -Wl,-soname,$(soname)
+ifeq (linux-android,$(findstring linux-android,$(host_os)))
+#crosscompiling for android needs to create unvesioned libs
+shlib		= lib$(NAME)$(DLSUFFIX)
+LINK.shared		+= -Wl,-soname,lib$(NAME)$(DLSUFFIX)
+else
+LINK.shared		+= -Wl,-soname,$(soname)
+endif
   endif
   BUILD.exports		= ( echo '{ global:'; $(AWK) '/^[^\#]/ {printf "%s;\n",$$1}' $<; echo ' local: *; };' ) >$@
   exports_file		= $(SHLIB_EXPORTS:%.txt=%.list)
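To make the patch's effect concrete, here is a sketch of the naming scheme it changes (Python as pseudocode; the exact version numbers are illustrative, and the normal-Linux pattern is a rough rendering of what Makefile.shlib computes, not a verbatim port):

```python
def shlib_names(name, major, minor, dlsuffix=".so", android=False):
    """Return (file name, soname) roughly as src/Makefile.shlib would."""
    if android:
        # The proposed patch: no version suffix at all on Android.
        unversioned = f"lib{name}{dlsuffix}"
        return unversioned, unversioned
    # Normal Linux rule: fully versioned file, major-versioned soname.
    return (f"lib{name}{dlsuffix}.{major}.{minor}",
            f"lib{name}{dlsuffix}.{major}")

print(shlib_names("pq", 5, 16))                # ('libpq.so.5.16', 'libpq.so.5')
print(shlib_names("pq", 5, 16, android=True))  # ('libpq.so', 'libpq.so')
```

The soname is what the dynamic linker records in dependent binaries, which is why dropping it is the ABI-compatibility concern raised downthread.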


Re: Remove unneeded PGDATABASE setting from TAP tests

2023-12-30 Thread Michael Paquier
On Sun, Dec 31, 2023 at 07:24:08AM +0530, Bharath Rupireddy wrote:
> Some of the TAP tests are explicitly setting PGDATABASE environment
> variable to 'postgres', which isn't needed because the TAP test's perl
> library PostgreSQL::Test::Cluster already sets it once and for all.
> I've attached a patch that removes all such unneeded PGDATABASE
> settings.
> 
> Thoughts?

Hmm.  I don't see any reason to not do that as these are not really
self-documenting either.  How about 011_clusterdb_all.pl for
PGDATABASE?
--
Michael




Re: Exclude generated wait_event files from pgindent

2023-12-30 Thread Michael Paquier
On Sun, Dec 31, 2023 at 07:24:02AM +0530, Bharath Rupireddy wrote:
> I think we need to add pgstat_wait_event.c, wait_event_types.h and
> wait_event_funcs_data.c to pgindent/exclude_file_patterns, otherwise,
> pgindent on the entire source tree after the source code compilation
> shows up these generated files. I've attached a patch to fix it.
> 
> Thoughts?

Indeed, will fix.
--
Michael




Remove unneeded PGDATABASE setting from TAP tests

2023-12-30 Thread Bharath Rupireddy
Hi,

Some of the TAP tests are explicitly setting PGDATABASE environment
variable to 'postgres', which isn't needed because the TAP test's perl
library PostgreSQL::Test::Cluster already sets it once and for all.
I've attached a patch that removes all such unneeded PGDATABASE
settings.

Thoughts?

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
From d428cc8a9ded13e22fea8a472b6a218ae77184cc Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy 
Date: Thu, 28 Dec 2023 11:48:37 +
Subject: [PATCH v1] Remove unneeded PGDATABASE setting from TAP tests

Some of the TAP tests are explicitly setting PGDATABASE
environment variable to 'postgres', which isn't needed because the
TAP test's perl library PostgreSQL::Test::Cluster already sets it
once and for all. This commit removes all such unneeded PGDATABASE
settings.
---
 src/test/recovery/t/004_timeline_switch.pl | 2 --
 src/test/recovery/t/019_replslot_limit.pl  | 2 --
 2 files changed, 4 deletions(-)

diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index edaef91845..5a3dac972f 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -8,8 +8,6 @@ use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
 
-$ENV{PGDATABASE} = 'postgres';
-
 # Ensure that a cascading standby is able to follow a newly-promoted standby
 # on a new timeline.
 
diff --git a/src/test/recovery/t/019_replslot_limit.pl b/src/test/recovery/t/019_replslot_limit.pl
index 6c244a5550..29c0390697 100644
--- a/src/test/recovery/t/019_replslot_limit.pl
+++ b/src/test/recovery/t/019_replslot_limit.pl
@@ -12,8 +12,6 @@ use PostgreSQL::Test::Cluster;
 use Test::More;
 use Time::HiRes qw(usleep);
 
-$ENV{PGDATABASE} = 'postgres';
-
 # Initialize primary node, setting wal-segsize to 1MB
 my $node_primary = PostgreSQL::Test::Cluster->new('primary');
 $node_primary->init(allows_streaming => 1, extra => ['--wal-segsize=1']);
-- 
2.34.1



Exclude generated wait_event files from pgindent

2023-12-30 Thread Bharath Rupireddy
Hi,

I think we need to add pgstat_wait_event.c, wait_event_types.h and
wait_event_funcs_data.c to pgindent/exclude_file_patterns, otherwise,
pgindent on the entire source tree after the source code compilation
shows up these generated files. I've attached a patch to fix it.

Thoughts?

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
From b15de73799692fb617e17a2ce5956b5ebca86cb3 Mon Sep 17 00:00:00 2001
From: Bharath Rupireddy 
Date: Thu, 28 Dec 2023 11:34:55 +
Subject: [PATCH v1] Exclude generated wait_event files from pgindent

---
 src/tools/pgindent/exclude_file_patterns | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/tools/pgindent/exclude_file_patterns b/src/tools/pgindent/exclude_file_patterns
index 6405a00511..a311507249 100644
--- a/src/tools/pgindent/exclude_file_patterns
+++ b/src/tools/pgindent/exclude_file_patterns
@@ -46,6 +46,10 @@ src/include/pg_config\.h$
 src/pl/plperl/ppport\.h$
 src/pl/plperl/SPI\.c$
 src/pl/plperl/Util\.c$
+src/backend/nodes/nodetags\.h$
+src/backend/utils/activity/pgstat_wait_event\.c$
+src/backend/utils/activity/wait_event_types\.h$
+src/backend/utils/activity/wait_event_funcs_data\.c$
 #
 # pg_bsd_indent has its own, idiosyncratic indentation style.
 # We'll stick to that to permit comparison with the FreeBSD upstream.
-- 
2.34.1
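The exclude file is a list of regular expressions matched against each candidate path. A sketch of the matching semantics (Python's regex syntax is close enough for these patterns; the unanchored-search behavior is an assumption about how pgindent applies them):

```python
import re

new_patterns = [
    r"src/backend/nodes/nodetags\.h$",
    r"src/backend/utils/activity/pgstat_wait_event\.c$",
    r"src/backend/utils/activity/wait_event_types\.h$",
    r"src/backend/utils/activity/wait_event_funcs_data\.c$",
]

def excluded(path):
    # Each line is treated as an unanchored pattern, hence search, not match.
    return any(re.search(p, path) for p in new_patterns)

print(excluded("src/backend/utils/activity/wait_event_types.h"))  # True
print(excluded("src/backend/utils/activity/wait_event.c"))        # False: hand-written file stays indented
```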



Re: pg_class.reltuples of brin indexes

2023-12-30 Thread Tomas Vondra
On 11/21/23 21:48, Bruce Momjian wrote:
> On Tue, Mar 27, 2018 at 08:58:11PM +0900, Masahiko Sawada wrote:
>> ...
>>
>> If I understand correctly pg_class.reltuples of indexes should have
>> the number of index tuples but especially for brin indexes it would be
>> hard to estimate it in the analyze code. I thought that we can change
>> brinvacuumcleanup so that it returns the estimated number of index
>> tuples and do vac_update_relstats using that value but it would break
>> API contract. Better ideas?
> 
> I assume there is nothing to do on this issue.
> 

I'm not sure. I think the current behavior is (still) wrong - I just
rediscovered it during testing BRIN. I haven't checked, but I guess GIN
is still affected too.

What's not clear to me is if this is merely cosmetic issue (making
pg_class data confusing for people), or if it has some practical impact.
And I'm not sure there's a good way to improve this, except for some
basic guesswork. For BRIN I can imagine simply calculating the number of
page ranges (relpages / pages_per_range), but no idea about GIN.
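That BRIN guesswork is simple arithmetic; a hedged sketch (pages_per_range defaults to 128 for BRIN, and the ceiling division is my assumption about how partial ranges would be counted):

```python
def brin_reltuples_estimate(relpages, pages_per_range=128):
    # One index tuple per page range, rounding partial ranges up -- a rough
    # guess for what pg_class.reltuples of a BRIN index could report.
    return -(-relpages // pages_per_range)   # ceiling division

print(brin_reltuples_estimate(1000))   # 8
print(brin_reltuples_estimate(0))      # 0
```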


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Show WAL write and fsync stats in pg_stat_io

2023-12-30 Thread Michael Paquier
On Tue, Dec 26, 2023 at 03:35:52PM +0300, Nazir Bilal Yavuz wrote:
> On Tue, 26 Dec 2023 at 13:10, Michael Paquier  wrote:
>> I am not sure while the whole point of the exercise is to have all the
>> I/O related data in a single view.  Something that I've also found a
>> bit disturbing yesterday while looking at your patch is the fact that
>> the operation size is guessed from the context and object type when
>> querying the view because now everything is tied to BLCKSZ.  This
>> patch extends it with two more operation sizes, and there are even
>> cases where it may be a variable.  Could it be a better option to
>> extend pgstat_count_io_op_time() so as callers can themselves give the
>> size of the operation?
> 
> Do you mean removing the op_bytes column and tracking the number of
> bytes in reads, writes, and extends? If so, that makes sense to me but
> I don't want to remove the number of operations; I believe that has a
> value too. We can extend the pgstat_count_io_op_time() so it can both
> track the number of bytes and the number of operations.

Apologies if my previous wording sounded confusing.  The idea I had in
mind was to keep op_bytes in pg_stat_io, and extend it so that a value
of NULL (or 0, or -1) means that "writes", "extends" and "reads" are
counted as numbers of bytes.
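One way the view-side arithmetic could look, sketched in Python (the column names follow pg_stat_io, but the NULL-means-variable-size convention is only the idea under discussion, not committed behavior):

```python
def total_bytes(op_count, op_bytes, tracked_bytes=None):
    """Byte total for one hypothetical pg_stat_io row.

    op_bytes is the fixed size per operation (e.g. BLCKSZ for block I/O).
    A NULL (None) op_bytes would mean the size varies per operation, so
    the byte count must be tracked directly instead of derived.
    """
    if op_bytes is None:
        return tracked_bytes
    return op_count * op_bytes

BLCKSZ = 8192
print(total_bytes(10, BLCKSZ))        # 81920: fixed-size block I/O
print(total_bytes(10, None, 123456))  # 123456: variable-size (e.g. WAL) I/O
```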

> Also, it is not directly related to this patch but vectored IO [1] is
> coming soon; so the number of operations could be wrong since vectored
> IO could merge a couple of operations.

Hmm.  I have not checked this patch series so I cannot say for sure,
but we'd likely just want to track the number of bytes if a single
operation has a non-equal size rather than registering in pg_stat_io N
rows with different op_bytes, no?  I am looping in Thomas Munro in CC
for comments.

>> The whole patch is kind of itself complicated enough, so I'd be OK to
>> discard the case of the WAL receiver for now.  Now, if we do so, the
>> code stack of pgstat_io.c should handle WAL receivers as something
>> entirely disabled until all the known issues are solved.  There is
>> still a lot of value in tracking WAL data associated to the WAL
>> writer, normal backends and WAL senders.
> 
> Why can't we add comments and leave it as it is? Is it because this
> could cause misunderstandings?
> 
> If we want to entirely disable it, we can add
> 
> if (MyBackendType == B_WAL_RECEIVER && io_object == IOOBJECT_WAL)
> return;
> 
> to the top of the pgstat_count_io_op_time() since all IOOBJECT_WAL
> calls are done by this function, then we can disable it at
> pgstat_tracks_io_bktype().

Yeah, a limitation like that may be acceptable for now.  Tracking the
WAL writer and WAL sender activities can be relevant in a lot of cases
even if we don't have the full picture for the WAL receiver yet.
--
Michael




Re: Commitfest 2024-01 starting in 3 days!

2023-12-30 Thread Michael Paquier
On Fri, Dec 29, 2023 at 02:41:55PM +0530, vignesh C wrote:
> Commitfest 2024-01 is starting in 3 days!
> Please register the patches which have not yet registered. Also if
> someone has some pending patch that is not yet submitted, please
> submit and register for 2024-01 Commitfest.
> I will be having a look at the commitfest entries, correcting the
> status if any of them need to be corrected and update the authors.
> Kindly keep the patch updated so that the reviewers/committers can
> review the patches and get it committed.

Please be careful that it should be possible to register patches until
the 31st of December 23:59 AoE, as of:
https://www.timeanddate.com/time/zones/aoe

The status of the CF should be switched after this time, not before.
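For anyone unsure what that deadline means locally: AoE ("Anywhere on Earth") is UTC-12, so a quick sanity check is:

```python
from datetime import datetime, timedelta, timezone

aoe = timezone(timedelta(hours=-12))  # "Anywhere on Earth" is UTC-12
deadline = datetime(2023, 12, 31, 23, 59, tzinfo=aoe)

# The CF registration cutoff expressed in UTC:
print(deadline.astimezone(timezone.utc))  # 2024-01-01 11:59:00+00:00
```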
--
Michael




Re: Next step towards 64bit XIDs: Switch to FullTransactionId for PGPROC->xid and XLogRecord->xl_xid

2023-12-30 Thread Michael Paquier
On Fri, Dec 29, 2023 at 02:36:19PM +0100, Matthias van de Meent wrote:
> I don't think this is an actionable change, as this wastes 4 more bytes (or
> 8 with alignment) in nearly all WAL records that don't use the
> HEAP/HEAP2/XLOG rmgrs, which would then be up to 10 (if not 14, when
64bit-aligned) bytes per record. Unless something like [0] gets committed
> this will add a significant write overhead to all operations, even if they
> are not doing anything that needs an XID.

I was eyeing at the patches before reading your comment, and saw this
across the two patches:

@@ -176,7 +176,7 @@ struct PGPROC
Latch   procLatch;  /* generic latch for process */
 
 
-   TransactionId xid;  /* id of top-level transaction 
currently being
+   FullTransactionId xid;  /* id of top-level transaction 
currently being
[...]
 typedef struct XLogRecord
 {
uint32  xl_tot_len; /* total len of entire record */
-   TransactionId xl_xid;   /* xact id */
+   pg_crc32c   xl_crc; /* CRC for this record */
+   FullTransactionId xl_xid;   /* xact id */

And FWIW, echoing with Matthias, making these generic structures
arbitrary larger is a non-starter.  We should try to make them
shorter.
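The overhead is easy to quantify, since WAL record headers are MAXALIGN'd. A back-of-the-envelope sketch (assuming 8-byte MAXALIGN and the 24-byte XLogRecord size of common platforms):

```python
def maxalign(n, align=8):
    # Round n up to the next multiple of the platform MAXALIGN (assumed 8).
    return (n + align - 1) // align * align

# The current XLogRecord is 24 bytes on common platforms; replacing the
# 4-byte TransactionId with an 8-byte FullTransactionId adds 4 bytes of
# payload, which alignment can inflate to 8 extra bytes per WAL record.
print(maxalign(24))      # 24 (already aligned)
print(maxalign(24 + 4))  # 32 -> 8 extra bytes per record
```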
--
Michael




Re: Add PQsendSyncMessage() to libpq

2023-12-30 Thread Michael Paquier
On Fri, Dec 29, 2023 at 12:52:30PM +0100, Jelte Fennema-Nio wrote:
> On Mon, 13 Nov 2023 at 09:20, Anthonin Bonnefoy
>  wrote:
> > \syncpipeline
> > tps = 2607.587877 (without initial connection time)
> > ...
> > \endpipeline
> > \startpipeline
> > tps = 2290.527462 (without initial connection time)
> 
> Those are some nice improvements. And I think once this patch is in,
> it would make sense to add the pgbench feature you're suggesting.
> Because indeed that makes it see what perf improvements can be gained
> for your workload.

Yeah, that sounds like a good idea seen from here.  (Still need to
look at the core patch.)
--
Michael




Re: pg_stat_statements: more test coverage

2023-12-30 Thread Michael Paquier
On Sat, Dec 30, 2023 at 08:39:47PM +0100, Peter Eisentraut wrote:
> On 29.12.23 06:14, Julien Rouhaud wrote:
>> I agree with Michael on this one, the only times I saw this pattern
>> was to comply with some company internal policy for minimal coverage
>> numbers.
> 
> Ok, skipped that.

Just to close the loop here.  I thought that I had sent a patch on the
lists that made use of these markers, but it looks like that's not the
case.  The only thread I've found is this one:
https://www.postgresql.org/message-id/d8f6bdd536df403b9b33816e9f7e0b9d@G08CNEXMBPEKD05.g08.fujitsu.local

(FWIW, I'm still skeptic about the idea of painting more backend code
with these outside the parsing areas, but I'm OK to be outnumbered.)
--
Michael




Re: Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)

2023-12-30 Thread Michael Paquier
On Sat, Dec 30, 2023 at 01:38:00PM +0100, Jelte Fennema-Nio wrote:
> On Sat, 30 Dec 2023 at 12:44, Ranier Vilela  wrote:
> > The function StartLogStreamer in (src/bin/pg_basebackup/pg_basebackup.c)
> > Has a copy and paste error.
> 
> Nice catch. Patch looks good to me.

Please see my reply here:
https://www.postgresql.org/message-id/zzczazg7y0ld9...@paquier.xyz

So these will likely be handled in a single batch.
--
Michael




Re: Fix resource leak (src/bin/pg_combinebackup/pg_combinebackup.c)

2023-12-30 Thread Michael Paquier
Hi Ranier,

On Sat, Dec 30, 2023 at 10:34:12AM -0300, Ranier Vilela wrote:
> The function scan_for_existing_tablespaces in
> (src/bin/pg_combinebackup/pg_combinebackup.c)
> Has a resource leak.
> 
> The function opendir
> 
> Must be freed with closedir
> 
> 
> The commit affected is dc21234
> 

The community receives its own coverity reports.  These are not public
but we are aware of the reports related to pg_basebackup and
pg_combinebackup as an effect of dc212340058b.  Robert is planning to
handle all these AFAIK once the new year vacations and such cool down.

In short there is no need to worry here :)

Thanks for the patches.
--
Michael




Why do parallel scans require syncscans (but not really)?

2023-12-30 Thread Tomas Vondra
Hi,

While investigating something, I noticed it's not really possible to
disable syncscans for a particular parallel scan.

That is, with serial scans, we can do this:

table_index_build_scan(heap, index, indexInfo, false, true,
   callback, state, NULL);

where false means "allow_sync=false", i.e. disabling synchronized scans.

However, if this is tied to a parallel scan, that's not possible. If you
try doing this:

table_index_build_scan(heap, index, indexInfo, false, true,
   callback, state, parallel_scan);

it hits this code in heapam_index_build_range_scan()

  /*
   * Parallel index build.
   *
   * Parallel case never registers/unregisters own snapshot.  Snapshot
   * is taken from parallel heap scan, and is SnapshotAny or an MVCC
   * snapshot, based on same criteria as serial case.
   */
  Assert(!IsBootstrapProcessingMode());
  Assert(allow_sync);
  snapshot = scan->rs_snapshot;

Sadly, there's no explanation why parallel scans do not allow disabling
sync scans just like serial scans - and it's not quite obvious to me.

Just removing the assert is not sufficient to make this work, though.
Because just a little later we call heap_setscanlimits() which does:

  Assert(!scan->rs_inited); /* else too late to change */
  /* else rs_startblock is significant */
  Assert(!(scan->rs_base.rs_flags & SO_ALLOW_SYNC));

I don't quite understand what the comments for the asserts say, but
SO_ALLOW_SYNC seems to come from table_beginscan_parallel.

Perhaps the goal is to prevent mismatch between the parallel and serial
parts of the scans - OK, that probably makes sense. But I still don't
understand why table_beginscan_parallel forces the SO_ALLOW_SYNC instead
of having an allow_sync parameters too.

Is there a reason why all parallel plans should / must be synchronized?

Well, in fact they are not *required* because if I set

  synchronize_seqscans=off

this makes initscan() remove the SO_ALLOW_SYNC ...
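The flag juggling in question is plain bit arithmetic on rs_flags; a minimal sketch (the bit position is invented for illustration — the real SO_* values live in the table AM headers):

```python
SO_ALLOW_SYNC = 1 << 4  # illustrative bit position, not the real value

def apply_parallel_sync(rs_flags, phs_syncscan):
    # Mirrors the branch in initscan(): for a parallel scan, believe
    # whatever the shared ParallelTableScanDesc says.
    if phs_syncscan:
        return rs_flags | SO_ALLOW_SYNC
    return rs_flags & ~SO_ALLOW_SYNC

flags = apply_parallel_sync(0, phs_syncscan=True)
print(bool(flags & SO_ALLOW_SYNC))   # True
flags = apply_parallel_sync(flags, phs_syncscan=False)  # e.g. synchronize_seqscans=off
print(bool(flags & SO_ALLOW_SYNC))   # False
```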


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Fix Brin Private Spool Initialization (src/backend/access/brin/brin.c)

2023-12-30 Thread Ranier Vilela
On Sat, Dec 30, 2023, 19:19, Tomas Vondra <
tomas.von...@enterprisedb.com> wrote:

> On 12/29/23 18:02, Ranier Vilela wrote:
> >
> > ...
> >
> > As I wrote, the new patch version was for consideration.
> > It seems more like a question of style, so it's better to remove it.
> >
> > Anyway +1 for your original patch.
> >
>
> I've pushed my original patch. Thanks for the report.
>
Thank you.

Best regards,
Ranier Vilela


Re: Revise the Asserts added to bitmapset manipulation functions

2023-12-30 Thread David Rowley
On Fri, 29 Dec 2023 at 23:38, Richard Guo  wrote:
> After applying this process to the first RestrictInfo, the bitmapset now
> becomes {t2, t3}.  Consequently, the second RestrictInfo also perceives
> {t2, t3} as its required_relids.  As a result, when attempting to remove
> it from the joininfo lists, a problem arises, because it is not in t2's
> joininfo list.

Changing the relids so they reference t2 instead of t1 requires both
bms_add_member() and bms_del_member().  The bms_add_member() will
cause the set to be reallocated with my patch so I don't see why this
case isn't covered.
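The shared-relids hazard Richard describes can be sketched like this (plain Python sets standing in for Bitmapsets; the RestrictInfo/relids names are just labels for the illustration):

```python
# Two "RestrictInfos" sharing one relids set object:
shared = {"t1", "t3"}
rinfo_a = {"required_relids": shared}
rinfo_b = {"required_relids": shared}

# In-place modification -- the hazardous pattern: changing rinfo_a's relids
# from t1 to t2 silently changes rinfo_b's relids as well.
rinfo_a["required_relids"].discard("t1")
rinfo_a["required_relids"].add("t2")
print(sorted(rinfo_b["required_relids"]))   # ['t2', 't3'] -- b changed too

# Copy-on-modify, which always-reallocating makes detectable: build a new
# set and leave the shared one untouched.
shared2 = {"t1", "t3"}
rinfo_c = {"required_relids": shared2}
rinfo_c["required_relids"] = (shared2 - {"t1"}) | {"t2"}
print(sorted(shared2))                      # ['t1', 't3'] -- untouched
```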

> Also, here are some review comments for the v2 patch:
>
> * There is no reallocation that happens in bms_add_members(), do we need
> to call bms_copy_and_free() there?

The spec I put in the comment at the top of bitmapset.c says:

> have the code *always* reallocate the bitmapset when the
> * set *could* be reallocated as a result of the modification

Looking at bms_add_members(), it seems to me that the set *could* be
reallocated as a result of the modification, per:

if (a->nwords < b->nwords)
{
result = bms_copy(b);
other = a;
}

In my view, that meets the spec I outlined.

> * Do you think we can add Assert(bms_is_valid_set(a)) for bms_free()?

I did briefly have the Assert in bms_free(), but I removed it as I
didn't think it was that useful to validate the set before freeing it.
Certainly, there'd be an argument to do that, but I ended up not
putting it there. I wondered if there could be a case where we do
something like bms_int_members() which results in an empty set and
after checking for and finding the set is now empty, we call
bms_free().  If we did that then we'd Assert fail.  In reality, we use
pfree() instead of bms_free() as we already know the set is not NULL,
so it wouldn't cause a problem for that particular case. I wondered if
there might be another one that I didn't spot, so felt it was best not
to Assert there.

I imagine that if we found some case where the bms_free() Assert
failed, we'd likely just remove it rather than trying to make the set
valid before freeing it.  So it seems pretty pointless if we'd opt to
remove the Assert() at the first sign of trouble.

David




Re: Parallel CREATE INDEX for BRIN indexes

2023-12-30 Thread Tomas Vondra
Hi,

While cleaning up some unnecessary bits of the code and slightly
inaccurate comments, I ran into a failure when the parallel scan (used
by the parallel build) happened to be a synchronized scan. When the scan
did not start on page 0, the parallel callback failed to correctly
handle tuples after wrapping around to the start of the table.

AFAICS the extensive testing I did during development did not detect
this because strictly speaking the index was "correct" (as in not
returning incorrect results in queries), just less efficient (missing
some ranges, and some ranges being "wider" than necessary). Or perhaps
the tests happened to not trigger synchronized scans.

Should be fixed by 1ccab5038eaf261f. It took me ages to realize what the
problem is, and I initially suspected there's some missing coordination
between the workers/leader, or something.

So I started comparing the code to btree, which is where it originated,
and I realized there's indeed one difference - the BRIN code only does
half the work with the workersdonecv variable. The workers do correctly
update the count and notify the leader, but the leader never waits for
the count to be 0. That is, there's nothing like _bt_parallel_heapscan.

I wonder whether this actually is a problem, considering the differences
between the flow in BRIN and BTREE. In particular, the "leader" does the
work in _brin_end_parallel() after WaitForParallelWorkersToFinish(). So
it's not like there might be a worker still processing data, I think.

But now that I think about it, maybe it's not such a great idea to do
this kind of work in _brin_end_parallel(). Maybe it should do just stuff
related to termination of workers etc. and the merging of results should
happen elsewhere - earlier in brinbuild()? Then it'd make sense to have
something like _bt_parallel_heapscan ...


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Fix Brin Private Spool Initialization (src/backend/access/brin/brin.c)

2023-12-30 Thread Tomas Vondra
On 12/29/23 18:02, Ranier Vilela wrote:
>
> ...
> 
> As I wrote, the new patch version was for consideration.
> It seems more like a question of style, so it's better to remove it.
> 
> Anyway +1 for your original patch.
> 

I've pushed my original patch. Thanks for the report.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: SET ROLE x NO RESET

2023-12-30 Thread Michał Kłeczek

> On 30 Dec 2023, at 17:16, Eric Hanson  wrote:
> 
> What do you think of adding a NO RESET option to the SET ROLE command?

What I proposed some time ago is SET ROLE … GUARDED BY 'password', so that you 
could later: RESET ROLE WITH 'password'

https://www.postgresql.org/message-id/F9428C6E-4CCC-441D-A148-67BF36526D45%40kleczek.org

—
Michal

Re: Windows sockets (select missing events?)

2023-12-30 Thread Thomas Munro
Hi Ranier,

I doubt it really matters, unless perhaps it's upsetting some static
analysis tool?  It's bounded by FD_SETSIZE, a small number.

FWIW, I would like to delete pgwin32_select() in PG17.  Before PG16
(commit 7389aad6), the postmaster used it to receive incoming
connections, but that was replaced with latch.c code.  I think the
main other place that uses select() in the backend is in the RADIUS
authentication code.  That's arguably a bug, and should be replaced
with WaitEventSet too.  I wrote a patch like that, but haven't got
back to it yet:

https://www.postgresql.org/message-id/flat/CA%2BhUKGKxNoVjkMCksnj6z3BwiS3y2v6LN6z7_CisLK%2Brv%2B0V4g%40mail.gmail.com

If we could get that done, hopefully pgwin32_select() could go?  Are
there any more callers hiding somewhere?

I'd also like to remove various other socket API wrapper functions in
that file.  They exist to support blocking mode (our Windows sockets
are always in non-blocking mode at the OS level, but we simulate
non-blocking with our own user space wait loops in those wrappers),
and we also have some kludges to support connectionless datagrams
(formerly used for talking to the defunct stats backend, no longer
needed AFAICS).  But when I started looking into all that, I realised
that I'd first need to work on removing a few more straggler places
that use blocking mode for short stretches in the backend.  That is,
make it 100% true that sockets in the backend are always in
non-blocking mode.  I think that seems like a good goal, because any
place still using blocking mode is a place where a backend might hang
uninterruptibly (on any OS).

A closely related problem is fixing the socket event-loss problems we
have on Windows https://commitfest.postgresql.org/46/3523/, which is
moving at a glacial pace because I am not a Windows guy.  Eventually I
want to use long-lived WaitEventSet in lots of places for better
efficiency on all systems, and that seems to require fixing that
historical bug in our per-socket event tracking.  I think the end
point of all that is: we'd lose the wrappers that deal with
fake-blocking-mode and fake-signals, and lose the right to use
blocking signals at all in the backend, but keep just a few short
win32 wrappers that help track events reliably.




Re: Experiments with Postgres and SSL

2023-12-30 Thread Heikki Linnakangas

On 05/07/2023 02:33, Michael Paquier wrote:

On Tue, Jul 04, 2023 at 05:15:49PM +0300, Heikki Linnakangas wrote:

I don't see the point of the libpq 'sslalpn' option either. Let's send ALPN
always.

Admittedly having the options make testing different of combinations of old
and new clients and servers a little easier. But I don't think we should add
options for the sake of backwards compatibility tests.


Hmm.  I would actually argue in favor of having these with tests in
core to stress the previous SSL handshake protocol, as not having these
parameters would mean that we rely only on major version upgrades in
the buildfarm to test the backward-compatible code path, making issues
much harder to catch.  And we still need to maintain the
backward-compatible path for 10 years based on what pg_dump and
pg_upgrade need to support.


Ok, let's keep it.

I started to review this again. There's a lot of little things to fix 
before this is ready for commit, but overall this looks pretty good. A 
few notes / questions on the first two patches (in addition to the few 
comments I made earlier):


If the client sends a TLS ClientHello directly, but the server does not 
support TLS, it just closes the connection. It would be nice to still 
send some kind of an error to the client. Maybe a TLS alert packet? I 
don't want to start implementing TLS, but I think a TLS alert packet 
with a suitable error code would be just a constant.


The new CONNECTION_DIRECT_SSL_STARTUP state needs to be moved to the end of 
the enum. We cannot change the integer values of existing enum values, or 
clients compiled with an old libpq version would mix up the states.



/*
 * validate sslnegotiation option, default is "postgres" for the postgres
 * style negotiated connection with an extra round trip but more options.
 */


What "more options" does the negotiated connection provide?


if (conn->sslnegotiation)
{
    if (strcmp(conn->sslnegotiation, "postgres") != 0
        && strcmp(conn->sslnegotiation, "direct") != 0
        && strcmp(conn->sslnegotiation, "requiredirect") != 0)
    {
        conn->status = CONNECTION_BAD;
        libpq_append_conn_error(conn, "invalid %s value: \"%s\"",
                                "sslnegotiation", conn->sslnegotiation);
        return false;
    }

#ifndef USE_SSL
    if (conn->sslnegotiation[0] != 'p') {
        conn->status = CONNECTION_BAD;
        libpq_append_conn_error(conn, "sslnegotiation value \"%s\" invalid when SSL support is not compiled in",
                                conn->sslnegotiation);
        return false;
    }
#endif
}


At the same time, the patch allows the combination of "sslmode=disable" 
and "sslnegotiation=requiredirect". Seems inconsistent to error out if 
compiled without SSL support.



else
{
libpq_append_conn_error(conn, "sslnegotiation missing?");
return false;
}


In the other similar settings, like 'channel_binding' and 'sslcertmode', 
we strdup() the compiled-in default if the option is NULL. I'm not sure 
if that's necessary, I think the compiled-in defaults should get filled 
in conninfo_add_defaults(). If so, then those other places could be 
turned into errors like this too. This seems to be a bit of a mess even 
before this patch.


In pg_conn struct:


+   bool        allow_direct_ssl_try; /* Try to make a direct SSL connection
+                                      * without an "SSL negotiation packet" */
    bool        allow_ssl_try;  /* Allowed to try SSL negotiation */
    bool        wait_ssl_try;   /* Delay SSL negotiation until after
                                 * attempting normal connection */


It's getting hard to follow what combinations of these booleans are 
valid and what they're set to at different stages. I think it's time to 
turn all these into one enum, or something like that.


I intend to continue reviewing this after Jan 8th. I'd still like to get 
this into v17.


--
Heikki Linnakangas
Neon (https://neon.tech)





Re: pg_stat_statements: more test coverage

2023-12-30 Thread Peter Eisentraut

On 29.12.23 06:14, Julien Rouhaud wrote:


It looks good to me.  One minor complaint, I'm a bit dubious about
those queries:

SELECT count(*) <= 100 AND count(*) > 0 FROM pg_stat_statements;

Is it to actually test that pg_stat_statements won't store more than
pg_stat_statements.max records?  Note also that this query can't
return 0 rows, as the currently executed query will have an entry
added during post_parse_analyze.  Maybe a comment saying what this is
actually testing would help.


Yeah, I think I added that query before I added the queries to check the 
contents of pg_stat_statements.query itself, so it's a bit obsolete.  I 
reworked that part.



It would also be good to test that pg_stat_statements_info.dealloc is
more than 0 once enough statements have been issued.


I added that.




I have committed the patches 0002 and 0003 from the previous patch set
already.

I have also enhanced the TAP test a bit to check the actual content of
the output across restarts.


Nothing much to say about this one, it all looks good.


Ok, I have committed these two patches.


I'm not too hung up on the 0001 patch if others don't like that approach.


I agree with Michael on this one, the only times I saw this pattern
was to comply with some company internal policy for minimal coverage
numbers.


Ok, skipped that.




Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2023-12-30 Thread Jelte Fennema-Nio
On Sat, 30 Dec 2023 at 16:05, Jacob Burroughs wrote:
> What if we allowed the client to send `ParameterStatus` messages to
> the server to reconfigure protocol extensions that were requested at
> startup?

Sending ParameterStatus from the frontend is not possible, because
Sync already claimed the 'S' message type on the frontend side (and
even if that weren't the case, I think the name ParameterStatus is not
descriptive of what the message would do when you'd send it from the
frontend).

> Such processing would be independent of the transaction
> lifecycle since protocol-level options aren't related to transactions.
> Any errors in the set value would be handled with an `ErrorResponse`
> (including if an extension was not reconfigurable after connection
> startup), and success with a new `ReadyForQuery` message.

If we only consider modifying protocol extension parameters with
ParameterSet, then I think I like the idea of reusing ReadyForQuery
instead of introducing ParameterSetComplete. I like that it implicitly
makes clear that you should not send ParameterSet while you already
have an open pipeline for protocol extension parameters. If we go this
approach, we should probably explicitly disallow sending ParameterSet
while inside a pipeline.

However, if we also allow using ParameterSet to change regular runtime
parameters, then I think this approach makes less sense. Because then
you might want to batch regular runtime parameters together with other
actions in a pipeline, and use the expected "ignore everything after
the first error" behaviour.

> The actual
> effect of an extension change must be delayed until after the
> ReadyForQuery has been transmitted, though I don't know if that is
> feasible or if we would need to instead implicitly assume changes were
> successful and just close the connection on error.

I'm trying to think about how a client would want to handle a failure
when changing a protocol extension. Is there really an action it can
reasonably take at that point? I guess it might depend on the exact
extension, but I do think that in many cases closing the connection is
the only reasonable response. Maybe the server should even close the
connection with a FATAL error when it receives a ParameterSet for a
protocol extension but it fails to apply it.

> We wouldn't need
> to bump the protocol version since a client wouldn't (shouldn't) send
> these messages unless it successfully negotiated the relevant protocol
> extension(s) at startup.

I think I'd still prefer to bump the minor version, even if it's just
so that we've done it for the first time and we fixed all the libpq
issues with it. But I also think it makes sense from a versioning
perspective, if there's new message types that can be sent by the
client, which do not correspond to a protocol extension, then I think
the only reasonable thing is to update the version number. Otherwise
you'd basically need to define the ParameterSet message to be a part
of every new protocol extension that you would add. That seems more
confusing than saying that version 3.1 supports this new ParameterSet
message type.




Re: SET ROLE x NO RESET

2023-12-30 Thread Joe Conway

On 12/30/23 11:16, Eric Hanson wrote:

Hi,

What do you think of adding a NO RESET option to the SET ROLE command?

Right now Postgres can enforce data security with roles and RLS, but 
role-per-end-user doesn't really scale:  Db connections are per-role, so 
a connection pooler can't share connections across users.  We can work 
around this with policies that use session variables and checks against 
current_user, but it seems like role-per end user would be more 
beautiful.  If SET ROLE had a NO RESET option, you could connect through 
a connection pooler as a privileged user, but downgrade to the user's 
role for the duration of the session.


+1

I agree this would be useful.

In the meantime, in case it helps, see

  https://github.com/pgaudit/set_user

Specifically set_session_auth(text):
-
When set_session_auth(text) is called, the effective session and current 
user is switched to the rolename supplied, irrevocably. Unlike 
set_user() or set_user_u(), it does not affect logging nor allowed 
statements. If set_user.exit_on_error is "on" (the default), and any 
error occurs during execution, a FATAL error is thrown and the backend 
session exits.

-

--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com





SET ROLE x NO RESET

2023-12-30 Thread Eric Hanson
Hi,

What do you think of adding a NO RESET option to the SET ROLE command?

Right now Postgres can enforce data security with roles and RLS, but
role-per-end-user doesn't really scale:  Db connections are per-role, so a
connection pooler can't share connections across users.  We can work around
this with policies that use session variables and checks against
current_user, but it seems like role-per end user would be more beautiful.
If SET ROLE had a NO RESET option, you could connect through a connection
pooler as a privileged user, but downgrade to the user's role for the
duration of the session.

Thanks,
Eric


Re: Add new protocol message to change GUCs for usage with future protocol-only GUCs

2023-12-30 Thread Jacob Burroughs
> How about instead of talking about protocol-only GUCs (which are
> indeed currently a theoretical concept), we start considering this
> patch as a way to modify parameters for protocol extensions. Protocol
> extension parameters (starting with _pq_.) can currently only be
> configured using the StartupMessage and afterwards there is no way to
> modify them.
>
> I believe [1], [2] and [3] from my initial email could all use such
> protocol extension parameters, if those parameters could be changed
> after the initial startup.

What if we allowed the client to send `ParameterStatus` messages to
the server to reconfigure protocol extensions that were requested at
startup?  Such processing would be independent of the transaction
lifecycle since protocol-level options aren't related to transactions.
Any errors in the set value would be handled with an `ErrorResponse`
(including if an extension was not reconfigurable after connection
startup), and success with a new `ReadyForQuery` message. The actual
effect of an extension change must be delayed until after the
ReadyForQuery has been transmitted, though I don't know if that is
feasible or if we would need to instead implicitly assume changes were
successful and just close the connection on error.  We wouldn't need
to bump the protocol version since a client wouldn't (shouldn't) send
these messages unless it successfully negotiated the relevant protocol
extension(s) at startup.




Re: Things I don't like about \du's "Attributes" column

2023-12-30 Thread Isaac Morland
On Sat, 30 Dec 2023 at 09:23, Pavel Luzanov wrote:


> I think that writing the value "infinity" in places where there is no
> value is
> not a good thing. This hides the real value of the column. In addition,
> there is no reason to set "infinity" when the password is always valid with
> default NULL.
>

Would it make sense to make the column non-nullable and always set it to
infinity when there is no expiry?

In this case, I think NULL simply means infinity, so why not write that? If
the timestamp type didn't have infinity, then NULL would be a natural way
of saying that there is no expiry, but with infinity as a possible value, I
don't see any reason to think of no expiry as being the absence of an
expiry time rather than an infinite expiry time.


Re: Things I don't like about \du's "Attributes" column

2023-12-30 Thread Pavel Luzanov

Now I'm ready for discussion.

On 23.06.2023 03:50, Tom Lane wrote:

Nearby I dissed psql's \du command for its incoherent "Attributes"
column [1].  It's too late to think about changing that for v16,
but here's some things I think we should consider for v17:

* It seems weird that some attributes are described in the negative
("Cannot login", "No inheritance").  I realize that this corresponds
to the defaults, so that a user created by CREATE USER with no options
shows nothing in the Attributes column; but I wonder how much that's
worth.  As far as "Cannot login" goes, you could argue that the silent
default ought to be for the properties assigned by CREATE ROLE, since
the table describes itself as "List of roles".  I'm not dead set on
changing this, but it seems like a topic that deserves a fresh look.


Agree. The negative form looks strange.

A fresh look suggests moving the login attribute to its own column.
This attribute separates users from group roles, which is very important
information, and it deserves to be placed in a separate column. Of course,
it can be returned to "Attributes" if such a change is too radical.

On the other hand, the rolinherit attribute lost its importance in v16.
I don't see serious reasons for changing the default value, so we can
leave it in the "Attributes" column. In most cases it will be hidden.


* I do not like the display of rolconnlimit, ie "No connections" or
"%d connection(s)".  A person not paying close attention might think
that that means the number of *current* connections the user has.
A minimal fix could be to word it like "No connections allowed" or
"%d connection(s) allowed".  But see below.


The connlimit attribute moved from the "Attributes" column to a separate
column "Max connections" in extended mode, but without any modifications
to its values.

For me it looks normal.


* I do not like the display of rolvaliduntil, either.


Moved from "Attributes" column to separate column  "Password expire time"
in extended mode (+).


I suggest that maybe it'd
be okay to change the pg_roles view along the lines of

-   ''::text as rolpassword,
+   case when rolpassword is not null then ''::text end as 
rolpassword,


Done.
The same changes to pg_user.passwd for consistency.

Then we could fix \du to say nothing if rolpassword is null,
and when it isn't, print "Password valid until infinity" whenever
that is the case (ie, rolvaliduntil is null or infinity).


I think that writing the value "infinity" in places where there is no
value is not a good thing. This hides the real value of the column. In
addition, there is no reason to set "infinity" when the password is always
valid with the default NULL.

My suggestion is to add a new column "Has password?" in extended mode with
yes/no values and leave the rolvaliduntil values as they are.


* On a purely presentation level, how did we possibly arrive
at the idea that the connection-limit and valid-until properties
deserve their own lines in the Attributes column while the other
properties are comma-separated?  That makes no sense whatsoever,
nor does it look nice in \x display format.
(b) move these two things into separate columns.


Implemented this approach.

As a result, the describeRoles function is significantly simplified and
rewritten to conveniently print the whole query result. All the magic of
building the "Attributes" column moved to the SELECT statement, for easy
viewing by users via the ECHO_HIDDEN variable.


Here is an example output.

--DROP ROLE alice, bob, charlie, admin;

CREATE ROLE alice LOGIN SUPERUSER NOINHERIT PASSWORD 'alice' VALID UNTIL 
'infinity' CONNECTION LIMIT 5;
CREATE ROLE bob LOGIN REPLICATION BYPASSRLS CREATEDB VALID UNTIL '2022-01-01';
CREATE ROLE charlie LOGIN CREATEROLE PASSWORD 'charlie' CONNECTION LIMIT 0;
CREATE ROLE admin;

COMMENT ON ROLE alice IS 'Superuser but with connection limit and with no 
inheritance';
COMMENT ON ROLE bob IS 'No password but with expire time';
COMMENT ON ROLE charlie IS 'No connections allowed';
COMMENT ON ROLE admin IS 'Group role without login';


Master.
=# \du
                                List of roles
 Role name |                         Attributes
-----------+-------------------------------------------------------------
 admin     | Cannot login
 alice     | Superuser, No inheritance                                  +
           | 5 connections                                              +
           | Password valid until infinity
 bob       | Create DB, Replication, Bypass RLS                         +
           | Password valid until 2022-01-01 00:00:00+03
 charlie   | Create role                                                +
           | No connections
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS

=# \du+
                         List of roles
 Role name | Attributes | Description

Re: Windows sockets (select missing events?)

2023-12-30 Thread Ranier Vilela
On Thu, Dec 28, 2023 at 14:45, Ranier Vilela wrote:

> Hi,
>
> The type of field fd_count is defined in winsock.h:
> typedef unsigned intu_int;
>
> So in the struct fd_set,
> the field fd_count is unsigned int.
>
> The pgwin32_select function has loops that use *int* as indices.
>
> Question: on Windows, could the socket select function be missing some events?
>
> If not, it would still be good preventive practice to use the correct type,
> right?
>
> Patch attached.
>
The attached patch fixes a possible overflow.

best regards,
Ranier Vilela


v1-001-fix-socket-select-events.patch
Description: Binary data


Fix resource leak (src/bin/pg_combinebackup/pg_combinebackup.c)

2023-12-30 Thread Ranier Vilela
Hi,

Per Coverity.

The function scan_for_existing_tablespaces in
src/bin/pg_combinebackup/pg_combinebackup.c has a resource leak:
the directory handle returned by opendir() must be released with
closedir().


The commit affected is dc21234


Trivial patch attached.

Best regards,
Ranier Vilela


fix-resource-leak-pg_combinebackup.patch
Description: Binary data


Re: Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)

2023-12-30 Thread Jelte Fennema-Nio
On Sat, 30 Dec 2023 at 12:44, Ranier Vilela  wrote:
> The function StartLogStreamer in src/bin/pg_basebackup/pg_basebackup.c
> has a copy-and-paste error.

Nice catch. Patch looks good to me.




Re: Streaming I/O, vectored I/O (WIP)

2023-12-30 Thread Cédric Villemain

On 11/12/2023 at 10:12, Thomas Munro wrote:

3.  smgrreadv/smgrwritev patch:

  * improved ENOSPC handling
  * improve description of EOF and ENOSPC handling
  * fixed the sizes reported in dtrace static probes
  * fixed some words in the docs about that
  * changed error messages to refer to "blocks %u..%u"

4.  smgrprefetch-with-nblocks patch has no change, hasn't drawn any
comments hopefully because it is uncontroversial.

I'm planning to commit these fairly soon.



Thanks, very useful additions.
Not sure what you have already lined up to come next...

I have two small patches here:
* use range prefetch in pg_prewarm (smgrprefetch only at the moment,
with smgrreadv to come next).
* support nblocks=0 in smgrprefetch (posix_fadvise accepts a len of 0
to apply the flag from the offset to the end of the file).


Should I add them to the commitfest?
---
Cédric Villemain +33 (0)6 20 30 22 52
https://Data-Bene.io
PostgreSQL Expertise, Support, Training, R&D




Fix copy and paste error (src/bin/pg_basebackup/pg_basebackup.c)

2023-12-30 Thread Ranier Vilela
Hi,

Per Coverity.

The function StartLogStreamer in src/bin/pg_basebackup/pg_basebackup.c
has a copy-and-paste error.

The commit affected is dc212340058b4e7ecfc5a7a81ec50e7a207bf288


Trivial patch attached.

Best regards,
Ranier Vilela


fix-copy-and-paste-error-pg_basebackup.patch
Description: Binary data


Re: backtrace_on_internal_error

2023-12-30 Thread Peter Eisentraut

On 19.12.23 17:29, Tom Lane wrote:

IMO, we aren't really going to get a massive payoff from this with
the current backtrace output; it's just not detailed enough.  It's
better than nothing certainly, but to really move the goalposts
we'd need something approaching gdb's "bt full" output.  I wonder
if it'd be sane to try to auto-invoke gdb.  That's just blue sky
for now, though.  In the meantime, I agree with the proposal as it
stands (that is, auto-backtrace on any XX000 error).  We'll soon find
out whether it's useless, or needs more detail to be really helpful,
or is just right as it is.  Once we have some practical experience
with it, we can course-correct as needed.


Based on this, I have committed my original patch.