Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Geoff Winkless
On Wed, 6 May 2020 at 00:05, Tim Cross  wrote:
> Where Tom's solution fails is with smaller companies that cannot afford
> this level of infrastructure.

Is there an objection to openldap? It's lightweight (so could
reasonably be run on the same hardware without significant impact),
BSD-ish and mature, and (with the password policy overlay) should
provide exactly the functionality the OP requested.
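For reference, a rough sketch of what the ppolicy overlay configuration can look like in cn=config style. The DNs, database index, and thresholds below are illustrative assumptions, not a drop-in config:

```ldif
# Load the ppolicy overlay onto the main database and point it at a
# default policy entry (DN names are examples).
dn: olcOverlay=ppolicy,olcDatabase={1}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcPPolicyConfig
olcOverlay: ppolicy
olcPPolicyDefault: cn=default,ou=policies,dc=example,dc=com

# The policy itself: lock an account after 5 consecutive failed binds,
# for 10 minutes (pwdLockoutDuration is in seconds).
dn: cn=default,ou=policies,dc=example,dc=com
objectClass: pwdPolicy
objectClass: person
cn: default
sn: default
pwdAttribute: userPassword
pwdMaxFailure: 5
pwdLockout: TRUE
pwdLockoutDuration: 600
```

With something like this in place, the lockout happens in the directory, so it applies to every service that authenticates against it, not just Postgres.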

Geoff




Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Paul Förster
Hi Geoff,

> On 06. May, 2020, at 10:33, Geoff Winkless  wrote:
> 
> Is there an objection to openldap? It's lightweight (so could
> reasonably be run on the same hardware without significant impact),
> BSD-ish and mature, and (with the password policy overlay) should
> provide exactly the functionality the OP requested.
> 
> Geoff

we use openldap in our company and it works as desired.

Cheers,
Paul





Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Ram Pratap Maurya
Hi Team,

We are facing a problem in our PostgreSQL production database related to
abnormal growth of index size. Some indexes are growing abnormally, and the
index size is larger than the table data size.
One table has 75 GB of indexes though the table itself is only 25 GB. We
perform a vacuum on a monthly basis to release the used space.

I am attaching the screen shot for your reference. Could you please help us
resolve this, as it is degrading performance drastically.



Regards,
Ram Pratap.
Lava International Limited.
Tel: +91-120-4637148




Re: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Ravi Krishna


> On May 6, 2020, at 5:48 AM, Ram Pratap Maurya 
>  wrote:
> 
> Hi Team,
>  
> We are facing a problem in our PostgreSQL production database related to 
> abnormal growth of index size. Some of the indexes are having abnormal growth 
> and index size is larger than table data size.
> One table is having 75 G.B of index though table size is only 25 G.B. On 
> monthly basis we are performing vacuum to release the used space.
>  
> I am attaching the screen shot for your reference. Could you please help us 
> in resolving the same as this is degrading performance drastically.
>  
> 
> 

How many indexes are there on the table tstock_movement? Could it be that 65 GB 
is the sum total of the sizes of all its indexes?



Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Guillaume Lelarge
On Wed, 6 May 2020 at 04:18, Christian Ramseyer  wrote:

>
>
> On 05.05.20 16:13, Wolff, Ken L wrote:
> > Hi, everyone.  Wondering if there’s a way in PostgreSQL to automatically
> > lock accounts after a number of failed logins (a security requirement
> > for my organization).
> >
> > Locking accounts after X number of failed logins is an excellent way to
> > defeat brute force attacks, so I’m just wondering if there’s a way to do
> > this, other than the aforementioned hook.
> >
> >
>
> Hi Ken
>
> This doesn't seem mentioned in other replies so far: a very "unixy"
> approach to bolt this feature onto almost any Linux server process is
> the fail2ban (https://github.com/fail2ban/fail2ban) utility. This is a
> daemon that reads arbitrary logfiles, and then triggers an action if
> some failure condition is seen a number of times.
>
> Typically this will scan the logfile for an IP and on failure add a
> temporary firewall rule to block the source, but all of this is
> configurable. So in your case you can lock the account instead, and then
> decide if you want automatic unlocking after a while, if you want to
> drop the IP that tried to login additionally on the firewall as well, etc.
>
> Here is a quick, rough example with still some blanks to fill in - I put
> it on github for readability:
> 
>
> The main blanks are in the postgres-action.conf section. The called
> scripts in /usr/local/bin would need to be written. It can be as simple
> as "psql -c alter role xxx nologin", but you might add some features
> like connecting to the primary server if fail2ban triggered on the
> standby. Also I'm not sure if setting nologin is the best way to disable
> an account, but I'm sure somebody on here could tell you.
>
>
I already knew about fail2ban, but didn't know it could be set up this way.
That's pretty impressive. I've just finished testing your config files, and
it works really well (well, once you finally get rid of the SELinux
permission errors :) ). Anyway, thanks a lot for sharing this.
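For readers who don't follow the link, a minimal sketch of the general shape of such a setup. The file paths, jail name, log pattern, and action name are illustrative assumptions, not Christian's actual files, and the regex only works if log_line_prefix puts the client host where the filter expects it:

```ini
# /etc/fail2ban/filter.d/postgres-auth.conf (hypothetical)
[Definition]
# Match PostgreSQL's failed-login log line; assumes the client address
# appears in brackets via log_line_prefix and that fail2ban can read the log.
failregex = ^.*\[<HOST>\].*FATAL:\s+password authentication failed for user.*$

# /etc/fail2ban/jail.d/postgres.local (hypothetical)
[postgres-auth]
enabled  = true
filter   = postgres-auth
logpath  = /var/log/postgresql/postgresql.log
maxretry = 5
findtime = 600
# Instead of the default firewall ban, a custom action could run a script
# that executes e.g.: psql -c "ALTER ROLE someuser NOLOGIN"
action   = postgres-lock-account
```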


-- 
Guillaume.


Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Tim Cross


Geoff Winkless  writes:

> On Wed, 6 May 2020 at 00:05, Tim Cross  wrote:
>> Where Tom's solution fails is with smaller companies that cannot afford
>> this level of infrastructure.
>
> Is there an objection to openldap? It's lightweight (so could
> reasonably be run on the same hardware without significant impact),
> BSD-ish and mature, and (with the password policy overlay) should
> provide exactly the functionality the OP requested.
>

OpenLDAP is certainly the way I would go. However, for a number of
reasons, smaller companies seem somewhat resistant to that level of
integration. I suspect it is primarily because LDAP skills are less
prevalent amongst admins in these areas. Often, these companies don't
really have a planned architecture - things have grown organically to
the point where existing resources are fully allocated just trying to
keep all the bits running. It can be hard to sell the idea, especially
as those making the decisions are not across the issues: from where they
sit, it all looks to be working, and you're asking for more resources
when nothing seems to be broken. The IT guys often fail to sell the
benefits because they focus on the technical aspects rather than on the
business aspects.

One client I helped had admins who had been trying to move everything
over to a centralised LDAP solution for ages and failing. They had
presented great justification for why it was needed, but it focused on
the technical benefits rather than the business continuity, process
improvement and security benefits. Once we put together a new business
case which focused on improved processes for managing access, reduced
security audit costs and improved security controls, they were sold and
made the project a priority. 

Based on additional info I saw from the OP and plans to roll out
many databases, I think a centralised directory service approach is
really their only scalable and maintainable solution. In fact, they
probably need to look at their overall identity management architecture.
Failure to get that basic service correct will result in a major
blow-out in support issues as they increase their customer base.

-- 
Tim Cross




Re: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Christian Ramseyer
Hi

On 06.05.20 11:48, Ram Pratap Maurya wrote:

> We are facing a problem in our PostgreSQL production database related to
> abnormal growth of index size. Some of the indexes are having abnormal
> growth and index size is larger than table data size.
> 
> One table is having 75 G.B of index though table size is only 25 G.B. On
> monthly basis we are performing vacuum to release the used space.
> 
> 
> I am attaching the screen shot for your reference. Could you please help
> us in resolving the same as this is degrading performance drastically.
> 

Under some usage patterns, a periodic REINDEX might be advisable. See the
documentation on routine reindexing for more details; it might free up a
lot of space for you.

If it doesn't, you'll need to dive deeper into what these indexes
actually are, whether they are really used, etc. But in cases of abnormal
growth that gets worse and worse over time, the above is the first thing
to try in my experience.


Cheers
Christian



-- 
Christian Ramseyer, netnea ag
Network Management. Security. OpenSource.
https://www.netnea.com











pg_basebackup inconsistent performance

2020-05-06 Thread Jasen Lentz
We have pg_basebackup running on two of our DB servers that are replicated.  We 
are running Postgres 11, and it seems we are getting inconsistent performance 
from the backups, and we are unsure why.  We start out at 5-6 hours over a 
dedicated 10G port for 7TB.  It creeps up to 8-9 hours, then all of a sudden 
takes 12-16 hours.  There seems to be no rhyme or reason for the extended 
backup times.  The command we use for backups is as follows:

On server 2 (secondary), starts at 4PM
pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` --format=plain 
--write-recovery-conf --no-sync --wal-method=stream --checkpoint=fast 
--label=`hostname`-`echo $DATE` --no-verify-checksums --host= 
--username=replication --port=5432

On server 1 (Primary), starts at Midnight
pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` --format=plain 
--write-recovery-conf --no-sync --wal-method=stream --checkpoint=fast 
--label=`hostname`-`echo $DATE` --no-verify-checksums --host= 
--username=replication --port=5432

I'm not sure why or how we are running into the weeds.  I am the SysAdmin and 
am not familiar with the inner workings of the DB.  I can pass any commands 
that need run along to our DBA.

From the OS perspective, we are not seeing any problems with CPU, memory or 
disk.  We are running on RHEL 7.7

Thanks!




Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Stephen Frost
Greetings,

* Geoff Winkless (pgsqlad...@geoff.dj) wrote:
> On Wed, 6 May 2020 at 00:05, Tim Cross  wrote:
> > Where Tom's solution fails is with smaller companies that cannot afford
> > this level of infrastructure.
> 
> Is there an objection to openldap? It's lightweight (so could
> reasonably be run on the same hardware without significant impact),
> BSD-ish and mature, and (with the password policy overlay) should
> provide exactly the functionality the OP requested.

LDAP-based authentication in PG involves passing the user's password to
the database server in the clear (or tunneled through SSL, but that
doesn't help if the DB is compromised), so it's really not a good
solution.
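To make the contrast concrete, a sketch of the two pg_hba.conf styles being compared (addresses and server names are placeholders):

```text
# pg_hba.conf sketch (illustrative only).
# With the "ldap" method, the server receives the plaintext password on
# every login and forwards it to the directory for a bind:
#   host  all  all  10.0.0.0/24  ldap  ldapserver=ldap.example.com
# With scram-sha-256, only a salted challenge/response crosses the wire
# and the server never sees the cleartext password:
host  all  all  10.0.0.0/24  scram-sha-256
```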

Thanks,

Stephen




Re: Encoding conversion: Use replacement character instead of failing query with "ERROR: character with byte sequence 0xd0 0xad in encoding "UTF8" has no equivalent in encoding LATIN1" ?

2020-05-06 Thread Christian Ramseyer



On 06.05.20 02:00, Tom Lane wrote:
> Christian Ramseyer  writes:
>> Can I somehow influence the client:UTF8->server:LATIN1 character set
>> conversion so that instead of failing, it inserts an invalid codepoint
>> character, the utf8 hex bytes as string, drops the character or
>> something like that?
> 
> There's nothing built-in for that, but it seems like it wouldn't be
> hard to modify the code if you wanted a quick hack to do this.
> 
> In general, the system nominally supports multiple conversion functions
> per encoding pair, so you could imagine having an alternate conversion
> that doesn't throw errors.  Problem is that it's quite difficult to get
> the system to actually *use* a non-default conversion for anything really
> significant, like say client I/O.  I don't know that anyone's thought
> hard about how to improve that.
> 

Thanks Tom, that's basically like I suspected how it was, but I wanted
to make sure I'm not overlooking an easy workaround with a simple
"create conversion" or similar.

I really appreciate the quick answers from highly qualified people I'm
getting on here, without exceptions. If only "enterprise" product
support worked like that :)


Cheers
Christian


-- 
Christian Ramseyer, netnea ag
Network Management. Security. OpenSource.
https://www.netnea.com











Re: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Michael Lewis
Indexes larger than the table may be expected if there are many. It may be
prudent to check if they are being used in pg_stat_all_indexes.
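A sketch of that check (the 100 MB threshold is an arbitrary example cutoff):

```sql
-- Find indexes that are large but rarely scanned: candidates for
-- dropping, or for bloat investigation if they are in use.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_all_indexes
WHERE pg_relation_size(indexrelid) > 100 * 1024 * 1024   -- larger than 100 MB
ORDER BY pg_relation_size(indexrelid) DESC;
```

An idx_scan of zero (or near zero) since the last stats reset suggests the index is not earning its keep.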

If there are just a few indexes that are becoming bloated quickly, you'd
want to ensure your autovacuum settings are tuned more aggressively, and
consider lowering FILLFACTOR on the table to better support heap-only
tuples (HOT) updates such that the index isn't touched when other columns
are updated in the table. If you are on PG12, you can use REINDEX
CONCURRENTLY, assuming you have the space. If not, you can do the same
manually like below:

CREATE INDEX CONCURRENTLY idx_new...;
DROP INDEX CONCURRENTLY idx_old...;
ALTER INDEX idx_new... RENAME TO idx_old...;


RE: EXTERNAL: Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Wolff, Ken L
Thanks again, everyone, for all the responses and ideas.  I'm still getting 
caught up on the various responses but with respect to LDAP/AD, I truly wish it 
were an option.  I agree with the various sentiments that AD authentication is 
more manageable, scalable, secure, etc.  However, if there were one set of 
environs where you'd think we could rely exclusively on AD authentication, it 
would be SQL Server, which by default, relies on Windows & AD for its 
authentication.  However, for our company, even in our SQL Server environments, 
we almost always have to resort to internal (SQL-authenticated) accounts at 
times for various reasons:  usually because vendor software doesn't support AD 
authentication, but I've even heard some people mention docker containers can't 
use it, either.  Full disclosure - I haven't run that last one down yet, have 
only heard it in passing so don't know the details.

Christian's idea of fail2ban looks interesting, so I'm going to be 
investigating that.  

Thanks again, all of you.  Really appreciate the feedback and ideas!


Ken

-Original Message-
From: Stephen Frost  
Sent: Wednesday, May 06, 2020 7:28 AM
To: Geoff Winkless 
Cc: Tim Cross ; pgsql-generallists.postgresql.org 

Subject: EXTERNAL: Re: Lock Postgres account after X number of failed logins?

Greetings,

* Geoff Winkless (pgsqlad...@geoff.dj) wrote:
> On Wed, 6 May 2020 at 00:05, Tim Cross  wrote:
> > Where Tom's solution fails is with smaller companies that cannot 
> > afford this level of infrastructure.
> 
> Is there an objection to openldap? It's lightweight (so could 
> reasonably be run on the same hardware without significant impact), 
> BSD-ish and mature, and (with the password policy overlay) should 
> provide exactly the functionality the OP requested.

LDAP-based authentication in PG involves passing the user's password to the 
database server in the clear (or tunneled through SSL, but that doesn't help if 
the DB is compromised), so it's really not a good solution.

Thanks,

Stephen




Re: EXTERNAL: Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Stephen Frost
Greetings,

* Wolff, Ken L (ken.l.wo...@lmco.com) wrote:
> Thanks again, everyone, for all the responses and ideas.  I'm still getting 
> caught up on the various responses but with respect to LDAP/AD, I truly wish 
> it were an option.  I agree with the various sentiments that AD 
> authentication is more manageable, scalable, secure, etc.  However, if there 
> were one set of environs where you'd think we could rely exclusively on AD 
> authentication, it would be SQL Server, which by default, relies on Windows & 
> AD for its authentication.  However, for our company, even in our SQL Server 
> environments, we almost always have to resort to internal (SQL-authenticated) 
> accounts at times for various reasons:  usually because vendor software 
> doesn't support AD authentication, but I've even heard some people mention 
> docker containers can't use it, either.  Full disclosure - I haven't run that 
> last one down yet, have only heard it in passing so don't know the details.

At least as it involves vendor software, most of that is built on top of
libpq or JDBC and you can typically make them work with GSSAPI (which is
basically Kerberos, and is what AD/SQL Server uses), which is what you
want to be using.  Don't think the "ldap" auth in PG is like SQL Server
AD auth- it isn't, and "ldap" involves passing user's passwords around
in the clear, it's not secure.  So, you might not have as much need for
local accounts as you do for SQL server, but it's certainly possible
you'll end up needing them somewhere.

And yes, you can certainly get GSSAPI/Kerberos to work in docker and in
Kube, it's just more complicated due to sorting through DNS/rDNS and
such, but it's been done (and I've done it :).
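A sketch of what that looks like on the PostgreSQL side (realm and network are placeholders):

```text
# pg_hba.conf: accept Kerberos/GSSAPI (what AD speaks) instead of ldap.
# include_realm=0 strips the @REALM suffix so tickets for alice@EXAMPLE.COM
# map to the PG role "alice"; krb_realm restricts which realm is accepted.
host  all  all  10.0.0.0/24  gss  include_realm=0 krb_realm=EXAMPLE.COM
# On PG 12+ a "hostgssenc" line can additionally require GSSAPI-encrypted
# transport; on PG 11 a plain "host" line with the gss method is used.
```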

Thanks!

Stephen




RE: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Ram Pratap Maurya
Hello Ravi,

The total number of indexes is 10, and 65GB is the sum total of the sizes of 
all indexes on table "tstock_movement".
I am attaching the screen shot for your reference.



Regards,
Ram Pratap.
Lava International Limited.
Tel: +91-120-4637148


From: Ravi Krishna [mailto:srkrish...@comcast.net]
Sent: 06 May 2020 16:28
To: Ram Pratap Maurya
Cc: pgsql-gene...@postgresql.org; Ashish Chugh
Subject: Re: Abnormal Growth of Index Size - Index Size 3x larger than table 
size.




On May 6, 2020, at 5:48 AM, Ram Pratap Maurya 
<ram.mau...@lavainternational.in> wrote:

Hi Team,

We are facing a problem in our PostgreSQL production database related to 
abnormal growth of index size. Some of the indexes are having abnormal growth 
and index size is larger than table data size.
One table is having 75 G.B of index though table size is only 25 G.B. On 
monthly basis we are performing vacuum to release the used space.

I am attaching the screen shot for your reference. Could you please help us in 
resolving the same as this is degrading performance drastically.




How many indexes are there in the table tstock_movement?  Could it be that 65GB 
is the sum total of index size of all indexes.



Re: pg_basebackup inconsistent performance

2020-05-06 Thread Adrian Klaver

On 5/6/20 5:44 AM, Jasen Lentz wrote:
We have pg_basebackup running on two of our DB servers that are 
replicated.  We are running postgres 11, and it seems we are getting 
inconsistent performance from the backups and unsure as of why.  We 
start out at 5-6 hours over a dedicated 10G port for 7TB.  It creeps up 
to 8-9 hours then all of a sudden takes 12-16 hours.  There seems to be 


I'm guessing the above happens from one run to another, correct?

Where are the machines you are backing up from/to relative to each on 
the network?


Is there increased activity on the database servers e.g. inserts, 
updates, etc during the extended backups?


no rhyme or reason for the extended backup times.  The command we use 
for backups is as follows:


On server 2 (secondary), starts at 4PM

pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` 
--format=plain --write-recovery-conf --no-sync --wal-method=stream 
--checkpoint=fast --label=`hostname`-`echo $DATE` --no-verify-checksums 
--host= --username=replication --port=5432


On server 1 (Primary), starts at Midnight

pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` 
--format=plain --write-recovery-conf --no-sync --wal-method=stream 
--checkpoint=fast --label=`hostname`-`echo $DATE` --no-verify-checksums 
--host= --username=replication --port=5432


I’m not sure why or how we are running into the weeds.  I am the 
SysAdmin and am not familiar with the inner workings of the DB.  I can 
pass any commands that need run along to our DBA.


 From the OS perspective, we are not seeing any problems with CPU, 
memory or disk.  We are running on RHEL 7.7


Thanks!




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Ravi Krishna
> 
> Hello Ravi,
>  
> Total number of index is 10 and 65GB is the sum total of index size of all 
> indexes for table “tstock_movement”
> I am attaching the screen shot for your reference.

>  
In that case 65GB is not surprising.



Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Peter J. Holzer
On 2020-05-06 09:28:28 -0400, Stephen Frost wrote:
> LDAP-based authentication in PG involves passing the user's password to
> the database server in the clear (or tunneled through SSL, but that
> doesn't help if the DB is compromised), so it's really not a good
> solution.

Still a lot better than PostgreSQL's md5 scheme, which stores
password-equivalent hashes: If the database is compromised the attacker
has all hashes immediately and can use them to login. Intercepting
encrypted traffic even at the endpoint is much harder and can only
uncover passwords actually used.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Magnus Hagander
On Wed, May 6, 2020 at 5:26 PM Peter J. Holzer  wrote:

> On 2020-05-06 09:28:28 -0400, Stephen Frost wrote:
> > LDAP-based authentication in PG involves passing the user's password to
> > the database server in the clear (or tunneled through SSL, but that
> > doesn't help if the DB is compromised), so it's really not a good
> > solution.
>
> Still a lot better than PostgreSQL's md5 scheme, which stores
> password-equivalent hashes: If the database is compromised the attacker
> has all hashes immediately and can use them to login. Intercepting
> encrypted traffic even at the endpoint is much harder and can only
> uncover passwords actually used.
>

If the database is compromised the attacker already has the data, though,
so there's not as much need to log in anymore.

But more to the point -- one should not use md5 in PostgreSQL these days;
one should be using scram-sha-256, which does not have this problem (and
has been around for a few years by now), if using local database logins.
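A sketch of the switchover for a local role (the role name is an example; existing md5 hashes cannot be converted in place, so the password must be re-set):

```sql
-- Store new password verifiers in SCRAM form from now on
-- (this is also the server-wide default from PG 14 onward).
SET password_encryption = 'scram-sha-256';

-- Re-set the password so a SCRAM verifier replaces the old md5 hash.
ALTER ROLE app_user PASSWORD 'new-password-here';

-- Finally, change the auth method in pg_hba.conf from "md5" to
-- "scram-sha-256" and reload the server configuration.
```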

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: Abnormal Growth of Index Size - Index Size 3x larger than table size.

2020-05-06 Thread Ravi Krishna


> On May 6, 2020, at 10:52 AM, Ashish Chugh  
> wrote:
> 
> Hello Ravi, 
> 
> 
> Total number of indexes are 10 and size is 65 GB. Should we consider this a 
> normal scenario, or do we need to look into the growth of the indexes, as 
> this is increasing day by day while the table data is not increasing so 
> drastically? Due to this there is performance degradation, and we have to 
> run a full vacuum on a monthly basis.
> 
> Table size is only 25 gb.
> 
> Any help in this regard is appreciable.

Indexes are stored just like tables; from a storage perspective there is no 
difference between a table and an index.
So 10 indexes summing to 65GB, compared to 25GB for the one table, sounds 
possible.



Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Stephen Frost
Greetings,

* Peter J. Holzer (hjp-pg...@hjp.at) wrote:
> On 2020-05-06 09:28:28 -0400, Stephen Frost wrote:
> > LDAP-based authentication in PG involves passing the user's password to
> > the database server in the clear (or tunneled through SSL, but that
> > doesn't help if the DB is compromised), so it's really not a good
> > solution.
> 
> Still a lot better than PostgreSQL's md5 scheme, which stores
> password-equivalent hashes: If the database is compromised the attacker
> has all hashes immediately and can use them to login. Intercepting
> encrypted traffic even at the endpoint is much harder and can only
> uncover passwords actually used.

No, it's really not better because when you're talking about LDAP it's
usually in reference to AD or similar centralized data store- so now you
get a user's credentials not *just* for access to the particular PG
database that you've compromised but across the *entire* AD environment.

If you just compromise the md5 store (which you shouldn't really be
using anyway, but whatever) then, sure, you can use that PW equivilant
to get access into the DB that you've already compromised, and if they
use the same username for other PG databases then maybe those too, but
you don't get access to their VPN credentials, or the ability to RDP to
any server they're allowed to log in to, or to the SQL server databases
they have access to, or, or, or ...

(at least, not without cracking the md5 hash, which requires at least a
little bit of additional effort and we do "salt" it with the username so
it's not completely trivial...  still, please, please, please use SCRAM
for local logins, at least..)

Thanks,

Stephen




White space affecting parsing of range values

2020-05-06 Thread Thom Brown
Hi,

I noticed I'm getting an error when adding white space to a numeric
range.  I can run this:

postgres=# SELECT 5::numeric <@ '(,10]'::numrange;
 ?column?
--
 t
(1 row)

But I can't run this:

postgres=# SELECT 5::numeric <@ '(  ,10]'::numrange;
ERROR:  invalid input syntax for type numeric: "  "
LINE 1: SELECT 5::numeric <@ '(  ,10]'::numrange;
 ^
If one had constructed a series of ranges, and wanted to line them up
vertically for easy comprehension, this wouldn't be possible.

This doesn't seem to be a problem with regular numeric values:

postgres=# SELECT ' 3 '::numeric;
 numeric
-
   3
(1 row)


Shouldn't white space be ignored in range values?

-- 
Thom




RE: pg_basebackup inconsistent performance

2020-05-06 Thread Jasen Lentz
> I'm guessing the above happens from one run to another correct?

Yes

> Where are the machines you are backing up from/to relative to each on the
> network?

Direct ethernet connection between 10G network interfaces

> Is there increased activity on the database servers e.g. inserts, updates,
> etc during the extended backups?

Not according to sar reports


Jasen M. Lentz, M.Ed
Lead Systems Administrator
Sesco Enterprises, LLC
4977 State Route 30 East (Mailing Address Only)
Greensburg, PA 15601
W:  (724) 837-1991 x207
C:  (412) 848-5612


-Original Message-
From: Adrian Klaver  
Sent: Wednesday, May 6, 2020 10:28 AM
To: Jasen Lentz ; pgsql-general@lists.postgresql.org
Subject: Re: pg_basebackup inconsistent performance

On 5/6/20 5:44 AM, Jasen Lentz wrote:
> We have pg_basebackup running on two of our DB servers that are 
> replicated.  We are running postgres 11, and it seems we are getting 
> inconsistent performance from the backups and unsure as of why.  We 
> start out at 5-6 hours over a dedicated 10G port for 7TB.  It creeps 
> up to 8-9 hours then all of a sudden takes 12-16 hours.  There seems 
> to be

I'm guessing the above happens from one run to another correct?

Where are the machines you are backing up from/to relative to each on the 
network?

Is there increased activity on the database servers e.g. inserts, updates, etc 
during the extended backups?

> no rhyme or reason for the extended backup times.  The command we use 
> for backups is as follows:
> 
> On server 2 (secondary), starts at 4PM
> 
> pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` 
> --format=plain --write-recovery-conf --no-sync --wal-method=stream 
> --checkpoint=fast --label=`hostname`-`echo $DATE` 
> --no-verify-checksums --host= --username=replication 
> --port=5432
> 
> On server 1 (Primary), starts at Midnight
> 
> pg_basebackup --pgdata=/opt/postgres/pgbackup/`echo $DATE` 
> --format=plain --write-recovery-conf --no-sync --wal-method=stream 
> --checkpoint=fast --label=`hostname`-`echo $DATE` 
> --no-verify-checksums --host= --username=replication 
> --port=5432
> 
> I'm not sure why or how we are running into the weeds.  I am the 
> SysAdmin and am not familiar with the inner workings of the DB.  I can 
> pass any commands that need run along to our DBA.
> 
>  From the OS perspective, we are not seeing any problems with CPU, 
> memory or disk.  We are running on RHEL 7.7
> 
> Thanks!
> 


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: White space affecting parsing of range values

2020-05-06 Thread Adrian Klaver

On 5/6/20 9:00 AM, Thom Brown wrote:

Hi,

I noticed I'm getting an error when adding white space to a numeric
range.  I can run this:

postgres=# SELECT 5::numeric <@ '(,10]'::numrange;
  ?column?
--
  t
(1 row)

But I can't run this:

postgres=# SELECT 5::numeric <@ '(  ,10]'::numrange;
ERROR:  invalid input syntax for type numeric: "  "
LINE 1: SELECT 5::numeric <@ '(  ,10]'::numrange;
  ^
If one had constructed a series of ranges, and wanted to line them up
vertically for easy comprehension, this wouldn't be possible.

This doesn't seem to be a problem with regular numeric values:

postgres=# SELECT ' 3 '::numeric;
  numeric
-
3
(1 row)


Shouldn't white space be ignored in range values?



https://www.postgresql.org/docs/12/rangetypes.html

"Whitespace is allowed before and after the range value, but any 
whitespace between the parentheses or brackets is taken as part of the 
lower or upper bound value. (Depending on the element type, it might or 
might not be significant.)"


SELECT 5::numeric <@ '(00,10]'::numrange;
 ?column?
--
 t


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: White space affecting parsing of range values

2020-05-06 Thread Thom Brown
On Wed, 6 May 2020 at 17:13, Adrian Klaver  wrote:
>
> On 5/6/20 9:00 AM, Thom Brown wrote:
> > Hi,
> >
> > I noticed I'm getting an error when adding white space to a numeric
> > range.  I can run this:
> >
> > postgres=# SELECT 5::numeric <@ '(,10]'::numrange;
> >   ?column?
> > --
> >   t
> > (1 row)
> >
> > But I can't run this:
> >
> > postgres=# SELECT 5::numeric <@ '(  ,10]'::numrange;
> > ERROR:  invalid input syntax for type numeric: "  "
> > LINE 1: SELECT 5::numeric <@ '(  ,10]'::numrange;
> >   ^
> > If one had constructed a series of ranges, and wanted to line them up
> > vertically for easy comprehension, this wouldn't be possible.
> >
> > This doesn't seem to be a problem with regular numeric values:
> >
> > postgres=# SELECT ' 3 '::numeric;
> >   numeric
> > -
> > 3
> > (1 row)
> >
> >
> > Shouldn't white space be ignored in range values?
> >
>
> https://www.postgresql.org/docs/12/rangetypes.html
>
> "Whitespace is allowed before and after the range value, but any
> whitespace between the parentheses or brackets is taken as part of the
> lower or upper bound value. (Depending on the element type, it might or
> might not be significant.)
> "

I guess I should read the docs more carefully.  Shouldn't this be
insignificant for a numeric value?

> SELECT 5::numeric <@ '(00,10]'::numrange;
>   ?column?
> --
>   t

Your example isn't equivalent to mine.  That sets a lower bound.

Thom




Re: pg_basebackup inconsistent performance

2020-05-06 Thread Stephen Frost
Greetings,

* Jasen Lentz (jle...@sescollc.com) wrote:
> Where are the machines you are backing up from/to relative to each on the 
> network?
> Direct ethernet connection between 10G network interfaces

Is the backup server shared among other systems..?

> Is there increased activity on the database servers e.g. inserts, updates, 
> etc during the extended backups?
> Not according to sar reports

And there's no increased activity on the backup server either?

Have you looked at network traffic for the duration?  And/or disk i/o on
each system?  If you ran a backup once and then immediately after and
that's the 'fast' case then you may be seeing performance be better due
to a lot of data being in the filesystem cache.  pg_basebackup being
single-threaded probably doesn't help here either, you might want to
consider one of the parallel-backup options.

Thanks,

Stephen




Re: White space affecting parsing of range values

2020-05-06 Thread Adrian Klaver

On 5/6/20 9:19 AM, Thom Brown wrote:

On Wed, 6 May 2020 at 17:13, Adrian Klaver  wrote:


On 5/6/20 9:00 AM, Thom Brown wrote:

Hi,

I noticed I'm getting an error when adding white space to a numeric
range.  I can run this:

postgres=# SELECT 5::numeric <@ '(,10]'::numrange;
   ?column?
--
   t
(1 row)

But I can't run this:

postgres=# SELECT 5::numeric <@ '(  ,10]'::numrange;
ERROR:  invalid input syntax for type numeric: "  "
LINE 1: SELECT 5::numeric <@ '(  ,10]'::numrange;
   ^
If one had constructed a series of ranges, and wanted to line them up
vertically for easy comprehension, this wouldn't be possible.

This doesn't seem to be a problem with regular numeric values:

postgres=# SELECT ' 3 '::numeric;
   numeric
-
 3
(1 row)


Shouldn't white space be ignored in range values?



https://www.postgresql.org/docs/12/rangetypes.html

"Whitespace is allowed before and after the range value, but any
whitespace between the parentheses or brackets is taken as part of the
lower or upper bound value. (Depending on the element type, it might or
might not be significant.)
"


I guess I should read the docs more carefully.  Shouldn't this be
insignificant for a numeric value?


No:

select ' '::numeric;
ERROR:  invalid input syntax for type numeric: " "
LINE 1: select ' '::numeric;




SELECT 5::numeric <@ '(00,10]'::numrange;
   ?column?
--
   t


Your example isn't equivalent to mine.  That sets a lower bound.


SELECT 5::numeric <@ numrange(NULL  ,10, '(]');
 ?column?
--
 t

From previous link:

"-- Using NULL for either bound causes the range to be unbounded on that 
side.

SELECT numrange(NULL, 2.2);"



Thom




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: White space affecting parsing of range values

2020-05-06 Thread Tom Lane
Thom Brown  writes:
> I guess I should read the docs more carefully.  Shouldn't this be
> insignificant for a numeric value?

That would require the range code to know whether the subtype considers
whitespace significant (or perhaps more usefully, whether an all-spaces
input is valid).  We've stayed away from requiring range_in to have any
type-specific knowledge of that sort.

Still, you could argue that the rule ought to be "an empty or all-blank
value must be quoted to distinguish it from an omitted bound" rather than
"an empty value must be quoted to distinguish it from an omitted bound".

I'm not sure if we could get away with redefining that at this point,
though.  It looks like range_out quotes such values already, so maybe a
change wouldn't be totally catastrophic (in the sense of breaking dump
files).  But I still suspect there would be more people unhappy than
happy.

regards, tom lane
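For illustration, the literal syntax already distinguishes an omitted bound from a quoted one, which is handed to the subtype's input function (a sketch; the quoted-empty case should fail because numeric rejects an empty string):

```sql
-- Omitted lower bound: unbounded below
SELECT '(,10]'::numrange;
-- Quoted bound: the empty string is passed to numeric's input function
-- and rejected, mirroring the whitespace error earlier in the thread
SELECT '("",10]'::numrange;  -- expected to fail
```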




Re: White space affecting parsing of range values

2020-05-06 Thread Thom Brown
On Wed, 6 May 2020 at 17:30, Adrian Klaver  wrote:
>
> On 5/6/20 9:19 AM, Thom Brown wrote:
> > On Wed, 6 May 2020 at 17:13, Adrian Klaver  
> > wrote:
> >>
> >> On 5/6/20 9:00 AM, Thom Brown wrote:
> >>> Hi,
> >>>
> >>> I noticed I'm getting an error when adding white space to a numeric
> >>> range.  I can run this:
> >>>
> >>> postgres=# SELECT 5::numeric <@ '(,10]'::numrange;
> >>>?column?
> >>> --
> >>>t
> >>> (1 row)
> >>>
> >>> But I can't run this:
> >>>
> >>> postgres=# SELECT 5::numeric <@ '(  ,10]'::numrange;
> >>> ERROR:  invalid input syntax for type numeric: "  "
> >>> LINE 1: SELECT 5::numeric <@ '(  ,10]'::numrange;
> >>>^
> >>> If one had constructed a series of ranges, and wanted to line them up
> >>> vertically for easy comprehension, this wouldn't be possible.
> >>>
> >>> This doesn't seem to be a problem with regular numeric values:
> >>>
> >>> postgres=# SELECT ' 3 '::numeric;
> >>>numeric
> >>> -
> >>>  3
> >>> (1 row)
> >>>
> >>>
> >>> Shouldn't white space be ignored in range values?
> >>>
> >>
> >> https://www.postgresql.org/docs/12/rangetypes.html
> >>
> >> "Whitespace is allowed before and after the range value, but any
> >> whitespace between the parentheses or brackets is taken as part of the
> >> lower or upper bound value. (Depending on the element type, it might or
> >> might not be significant.)
> >> "
> >
> > I guess I should read the docs more carefully.  Shouldn't this be
> > insignificant for a numeric value?
>
> No:
>
> select ' '::numeric;
> ERROR:  invalid input syntax for type numeric: " "
> LINE 1: select ' '::numeric;
>
> >
> >> SELECT 5::numeric <@ '(00,10]'::numrange;
> >>?column?
> >> --
> >>t
> >
> > Your example isn't equivalent to mine.  That sets a lower bound.
>
> SELECT 5::numeric <@ numrange(NULL  ,10, '(]');
>   ?column?
> --
>   t

Yes, I guess the numrange function would be a decent substitute in this case.

>  From previous link:
>
> "-- Using NULL for either bound causes the range to be unbounded on that
> side.
> SELECT numrange(NULL, 2.2);"

-- 
Thom
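As a sketch of the workaround, the constructor form lets a series of ranges line up vertically, since NULL arguments (rather than whitespace) mark omitted bounds:

```sql
-- Aligning unbounded ranges with the constructor instead of literals
SELECT numrange(NULL, 10,   '(]');
SELECT numrange(1.5,  NULL, '[)');
SELECT numrange(NULL, 2.2);
```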




Re: White space affecting parsing of range values

2020-05-06 Thread Thom Brown
On Wed, 6 May 2020 at 17:33, Tom Lane  wrote:
>
> Thom Brown  writes:
> > I guess I should read the docs more carefully.  Shouldn't this be
> > insignificant for a numeric value?
>
> That would require the range code to know whether the subtype considers
> whitespace significant (or perhaps more usefully, whether an all-spaces
> input is valid).  We've stayed away from requiring range_in to have any
> type-specific knowledge of that sort.
>
> Still, you could argue that the rule ought to be "an empty or all-blank
> value must be quoted to distinguish it from an omitted bound" rather than
> "an empty value must be quoted to distinguish it from an omitted bound".
>
> I'm not sure if we could get away with redefining that at this point,
> though.  It looks like range_out quotes such values already, so maybe a
> change wouldn't be totally catastrophic (in the sense of breaking dump
> files).  But I still suspect there would be more people unhappy than
> happy.

Okay, I see that this isn't really worth changing.  It's surprising
behaviour, but I can see it's not a huge issue, and can be worked
around anyway.

Thanks

--
Thom




previous replication slot and new initdb

2020-05-06 Thread Support

Hi Folks,

Is there a way to refresh/reset/reinitialize the internal replication 
slot identifier for a hot standby that has been freshly rebuilt with 
initdb and pg_dumpall?

At the moment the master complains that the identifier (a bigint) of 
the node does not match the replication slot identifier. If there is a 
way to reset the slot and reuse it for the new initdb, that would be great.


thanks

David
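For what it's worth, a stale physical slot can be dropped and recreated with the built-in functions (a sketch; the slot name is a placeholder, and the standby must be pointed at the new slot in its primary_slot_name setting):

```sql
-- On the master: drop the slot tied to the old standby identity,
-- then create a fresh one for the rebuilt standby to use
SELECT pg_drop_replication_slot('standby1_slot');
SELECT pg_create_physical_replication_slot('standby1_slot');
```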




Re: Lock Postgres account after X number of failed logins?

2020-05-06 Thread Geoff Winkless
On Wed, 6 May 2020, 14:28 Stephen Frost,  wrote:

> Greetings,
>
> * Geoff Winkless (pgsqlad...@geoff.dj) wrote:
> > On Wed, 6 May 2020 at 00:05, Tim Cross  wrote:
> > > Where Tom's solution fails is with smaller companies that cannot afford
> > > this level of infrastructure.
> >
> > Is there an objection to openldap?
>
> LDAP-based authentication in PG involves passing the user's password to
> the database server in the clear (or tunneled through SSL, but that
> doesn't help if the DB is compromised), so it's really not a good
> solution
>

If your DB is compromised then (if the LDAP server is only used for the db)
what difference does it make to lose the passwords?

I was (as per the thread) suggesting a simple way for small companies to
achieve the OP's requirements without a large infrastructure investment and
without involving the pg team undertaking the rediscovery of novel circular
transportation-assisting devices.

Any large company will have an AD or similar setup already, clearly I'm not
suggesting using it in that situation.

AIUI you can configure kerberos with openldap if that's more your thing,
fwiw, but then IME the learning curve (and thus setup cost) increases
exponentially.

Geoff
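For reference, a search+bind LDAP entry in pg_hba.conf might look like this (a sketch; the server name, base DN, and client network are placeholders):

```
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   10.0.0.0/8  ldap ldapserver=ldap.example.com ldaptls=1 ldapbasedn="dc=example,dc=com" ldapsearchattribute=uid
```

With the ppolicy overlay enabled on the OpenLDAP side, failed binds count against the account there, giving the lockout-after-N-failures behaviour the OP asked for.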


New Role drop with Grant/Revokes stop working after subsequent runs

2020-05-06 Thread AC Gomez
Hi,

On PostgreSQL 9.6.

We have developed some code that creates a new role to be used as the main
role for DB usage. This code will be called on a predetermined frequency to
act a role/pwd rotation mechanism.

Each time the code is run we feed it the prior role that was created (the
Db owner being the initial role fed in).

The first time the code runs, it works as expected, ie, new user and pwd
created with all appropriate grants. Also, on the very first run Revokes
not done for the DB Owner because we want to keep db owner.

The second time we run the code we feed the prior new user created and that
goes as expected, ie, new role and pwd with all grants granted and prior
user's grants revoked and prior user deleted. No errors.

The third time we run it, we feed in the prior created user and as
expected, the user is created. However, this time GRANTS and REVOKES do not
take effect even though there aren't any errors. The only error this time
is that when the DROP ROLE command is issued an error is thrown saying that
the prior role cannot be dropped because it has dependencies. While the
error is correct, this is not expected, given the prior runs. When I check
for new user Grants and prior User revokes, they were not applied despite
the commands having run without error. I know they ran because I have
logging after each command runs, which would not happen if an error were to
be thrown.

This code does not run on a loop so there isn't a loop variable that goes
awry after the second run. And further, there is no state which we save
from prior runs other than user/password.

I suppose the main question is, why would a bunch of grant and revoke
commands run and not do anything, not even throw an error?

I can see why the process would have run without issue on the first run as
it was using the db master role. But after that, this is working with newly
created roles, so if there was a failure to be had, it should have happened
on the second run, yet it instead fails on the third run?

Here is a summary of the process:

   1. START
      a. We begin with the db owner role as the bootstrap seed (subsequent
         runs feed in the successively created users).
      b. With this role we create a new user/password, for example:
         CREATE USER UUU WITH PASSWORD 'PPpp' CREATEDB CREATEROLE
   2. GRANTS
      a. GRANT <prior_role> TO <new_user>
      b. For each database, for each schema:
         i.    GRANT USAGE ON SCHEMA <schema> TO <new_user>
         ii.   GRANT CREATE ON SCHEMA <schema> TO <new_user>
         iii.  GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA <schema> TO <new_user>
         iv.   GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA <schema> TO <new_user>
         v.    GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA <schema> TO <new_user>
         vi.   GRANT ALL DEFAULT PRIVILEGES
         vii.  GRANT on postgres_fdw
         viii. GRANT on foreign servers
   3. REVOKES
      a. GRANT <old_user> TO <new_user>
      b. REASSIGN OWNED BY <old_user> TO <new_user>
      c. DROP OWNED BY <old_user>
      d. For each database, for each schema:
         i.    REVOKE USAGE ON SCHEMA <schema> FROM <old_user>
         ii.   REVOKE CREATE ON SCHEMA <schema> FROM <old_user>
         iii.  REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA <schema> FROM <old_user>
         iv.   REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA <schema> FROM <old_user>
         v.    REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA <schema> FROM <old_user>
         vi.   REVOKE ALL DEFAULT PRIVILEGES
         vii.  REVOKE on postgres_fdw
         viii. REVOKE on foreign servers
   4. DROP ROLE <old_user> (if it is not the db owner)
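A minimal SQL sketch of one rotation step (role, schema, and password names are placeholders; a single schema is assumed for brevity):

```sql
-- create the replacement login role
CREATE ROLE app_user_v2 LOGIN PASSWORD 'secret' CREATEDB CREATEROLE;

-- grants for the new role
GRANT USAGE, CREATE ON SCHEMA public TO app_user_v2;
GRANT ALL PRIVILEGES ON ALL TABLES    IN SCHEMA public TO app_user_v2;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO app_user_v2;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO app_user_v2;

-- retire the previous role: hand over everything it owns, then drop it
GRANT app_user_v1 TO app_user_v2;
REASSIGN OWNED BY app_user_v1 TO app_user_v2;
DROP OWNED BY app_user_v1;   -- also revokes the old role's remaining privileges
DROP ROLE app_user_v1;
```

Note that REASSIGN OWNED and DROP OWNED are per-database, so they must be run in every database where the old role owns objects before DROP ROLE will succeed.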


Re: New Role drop with Grant/Revokes stop working after subsequent runs

2020-05-06 Thread David G. Johnston
On Wed, May 6, 2020 at 5:05 PM AC Gomez  wrote:

> We have developed some code that creates a new role to be used as the main
> role for DB usage. This code will be called on a predetermined frequency to
> act a role/pwd rotation mechanism.
>

> Each time the code is run we feed it the prior role that was created (the
> Db owner being the initial role fed in).
>

Frankly, I don't know why your algorithm is failing to work but I'd suggest
you implement a better algorithm.

Ownership and permissions are granted to roles (groups) that are not
allowed to login.
Login roles are made members of the group roles.
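A sketch of that pattern (names are placeholders): the privileges live on a NOLOGIN group role, so rotation only creates and drops login members, which own nothing and therefore drop cleanly:

```sql
-- one-time setup: the group role holds all privileges
CREATE ROLE app_group NOLOGIN;
GRANT USAGE, CREATE ON SCHEMA public TO app_group;

-- each rotation: add a new login member, drop the old one
CREATE ROLE app_login_v2 LOGIN PASSWORD 'secret' IN ROLE app_group;
DROP ROLE app_login_v1;  -- owns nothing, so no dependency errors
```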

> I suppose the main question is, why would a bunch of grant and revoke
> commands run and not do anything, not even throw an error?
>

Maybe it's a bug? - I doubt this kind of manipulation is all that common or
tested given the presence of what seems to be a superior alternative.

David J.




pg_restore V12 fails consistently against piped pg_dumps

2020-05-06 Thread Mohamed Wael Khobalatte
Hi all,

When doing a parallel pg_restore (v12) against a dump created through a
pipe using an earlier version (11 all the way to 9.6), it fails with the
known error of not finding data offsets. I understand the reasons for it
(potential inability to seek), which is documented in pg_restore.

What I don't understand is why the same `pg_restore -j` worked in earlier
versions (say running pg_restore_v11 against the same dumps). Has anything
changed in terms of ordering? I am actually quite curious what led to this
finally breaking consistently.

I also timed things, and found that pg_restore_12 against a pg_dump_12
takes about the same time as pg_restore_11 vs pg_dump_11, on the same
dataset. I initially did this to see if for some reason pg_restore_v11 was
compensating for the lack of the offsets, but this didn't seem to be the
case (or perhaps the dump is too small to matter, around 1.5GB).

-- 
Wael
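For context, the missing offsets come from how the dump was produced, not how it is restored: when pg_dump's custom-format output goes to a pipe, it cannot seek back to record each table's data offset in the table of contents. A sketch (host and database names are placeholders):

```shell
# Dump written through a pipe: output is not seekable, so data offsets
# are absent and a parallel restore must scan for each table's data
pg_dump -Fc mydb | ssh backuphost 'cat > mydb.dump'

# Dump written to a seekable file: offsets are recorded in the TOC,
# and parallel restore can jump straight to each table
pg_dump -Fc -f mydb.dump mydb
pg_restore -j 4 -d newdb mydb.dump
```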


Re: pg_restore V12 fails consistently against piped pg_dumps

2020-05-06 Thread Tom Lane
Mohamed Wael Khobalatte  writes:
> When doing a parallel pg_restore (v12) against a dump created through a
> pipe using an earlier version (11 all the way to 9.6), it fails with the
> known error of not finding data offsets. I understand the reasons for it
> (potential inability to seek), which is documented in pg_restore.

> What I don't understand is why the same `pg_restore -j` worked in earlier
> versions (say running pg_restore_v11 against the same dumps). Has anything
> changed in terms of ordering? I am actually quite curious what led to this
> finally breaking consistently.

Without a concrete example it's hard to say, but maybe the issue is that
v12 is more aggressive about parallelizing restores --- see 548e50976.

regards, tom lane