Re: pg_restore enhancements

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 2:28 PM Tom Lane  wrote:

> "Efrain J. Berdecia"  writes:
> > Thanks, the issue we've run into, which I guess could be really a setup
> issue, with running a COPY command while executing pg_restore, is that if
> we are restoring a large table (bigger than 500GB) our WAL directory can
> grow to be very large.
> > I would think that if the pg_restore or COPY command was able to support
> a batch-size option, this should allow postgres to either archive or remove
> wal files and prevent having to re-size the WAL directory for a one time
> refresh operation.
> > I'm trying to gauge how feasible it would be to start looking at
> contributing to add such a feature to either the COPY command or pg_restore.
>
> Given the shortage of other complaints, I tend to agree with Adrian
> that there's not likely to be much interest in adding complexity
> to pg_restore (or COPY) to address this.  You should probably look
> harder at the idea that you have some configuration problem that's
> triggering your WAL bloat.  If COPY can run you out of WAL space,
> then so could any future bulk insert or update.
>

What OP needs, I think, since I'd use it, too, is "pg_bulkload without the
intrusive hacks and restrictions".


Re: pg_restore enhancements

2023-11-22 Thread Tom Lane
"Efrain J. Berdecia"  writes:
> Thanks, the issue we've run into, which I guess could be really a setup 
> issue, with running a COPY command while executing pg_restore, is that if we 
> are restoring a large table (bigger than 500GB) our WAL directory can grow to 
> be very large.
> I would think that if the pg_restore or COPY command was able to support a 
> batch-size option, this should allow postgres to either archive or remove wal 
> files and prevent having to re-size the WAL directory for a one time refresh 
> operation.
> I'm trying to gauge how feasible it would be to start looking at contributing to 
> add such a feature to either the COPY command or pg_restore.

Given the shortage of other complaints, I tend to agree with Adrian
that there's not likely to be much interest in adding complexity
to pg_restore (or COPY) to address this.  You should probably look
harder at the idea that you have some configuration problem that's
triggering your WAL bloat.  If COPY can run you out of WAL space,
then so could any future bulk insert or update.

regards, tom lane
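For readers hitting the same symptom, the usual configuration angle is a hedged sketch like the following, not a tuned recommendation: wal_level = minimal requires max_wal_senders = 0 and a restart, and is only possible when no replicas or archiving are needed; the WAL-skipping optimization for COPY applies only when the table is created or truncated in the same transaction. A stuck archive_command is also a classic cause of an ever-growing pg_wal.

```
# postgresql.conf -- settings relevant to WAL volume during a one-time bulk load
wal_level = minimal          # with this, COPY into a table created (or
max_wal_senders = 0          # truncated) in the same transaction skips WAL
archive_mode = off           # if archiving stays on, verify archive_command
                             # succeeds, or pg_wal retains segments forever
```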




Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Adrian Klaver


On 11/22/23 10:01 AM, Adrian Klaver wrote:



On 11/22/23 9:55 AM, Andreas Kretschmer wrote:



Am 22.11.23 um 18:44 schrieb Atul Kumar:

I am giving this command
psql -d postgres -U postgres -p 5432 -h localhost
Then only I get that error.


so localhost resolved to an IPv6 address ...



Yeah, you should take a look at:

/etc/hosts


In the meantime include a line for IPv6 in pg_hba.conf, where the address
would be:


::1/128


Or you could change


host all postgres 127.0.0.1/32 scram-sha-256


to


host all postgres localhost scram-sha-256
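Taken together, a sketch of what the relevant pg_hba.conf block could look like (the database/user columns are assumed from the snippet posted earlier in the thread):

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       all                     scram-sha-256
host    all       postgres  127.0.0.1/32  scram-sha-256
host    all       postgres  ::1/128       scram-sha-256
```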





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Adrian Klaver


On 11/22/23 10:03 AM, Atul Kumar wrote:
Can you please share a command to check whether the IP is
resolved to IPv6?



This:

psql -d postgres -U postgres -p 5432 -h localhost

where I'm pretty sure

/etc/hosts

is resolving localhost --> ::1
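One way to confirm what the resolver actually returns for localhost — a small sketch (Python used only because it is widely available; `getent ahosts localhost` should show the same from a shell):

```python
import socket

# Resolve "localhost" the same way client programs such as psql do.
infos = socket.getaddrinfo("localhost", 5432, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # typically includes '::1' when IPv6 is enabled
```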



On Wed, Nov 22, 2023 at 11:25 PM Andreas Kretschmer 
 wrote:




On 22.11.23 at 18:44, Atul Kumar wrote:
> I am giving this command
> psql -d postgres -U postgres -p 5432 -h localhost
> Then only I get that error.

so localhost resolved to an IPv6 address ...

>
> but when I  pass ip or hostname of the local server then I don't
get
> such error message
> 1. psql -d postgres -U postgres -p 5432 -h 
> 2. psql -d postgres -U postgres -p 5432 -h 

resolves to an IPv4 address. Can you see the difference?

localhost != ipv4-address != hostname with ipv4 address

Andreas

>
>
> I don't get that error while using the above two commands.
>
>
> Regards.
>
>
> On Wed, Nov 22, 2023 at 10:45 PM Adrian Klaver
>  wrote:
>
>     On 11/22/23 09:03, Atul Kumar wrote:
>     > The entries that I changed were to replace the md5 with
>     scram-sha-256
>     > and remove unnecessary remote IPs.
>
>     FYI from:
>
> https://www.postgresql.org/docs/current/auth-password.html
>
>     md5
>
>          The method md5 uses a custom less secure challenge-response
>     mechanism. It prevents password sniffing and avoids storing
>     passwords on
>     the server in plain text but provides no protection if an
attacker
>     manages to steal the password hash from the server. Also,
the MD5
>     hash
>     algorithm is nowadays no longer considered secure against
determined
>     attacks.
>
>          The md5 method cannot be used with the
db_user_namespace feature.
>
>          To ease transition from the md5 method to the newer SCRAM
>     method,
>     if md5 is specified as a method in pg_hba.conf but the user's
>     password
>     on the server is encrypted for SCRAM (see below), then
SCRAM-based
>     authentication will automatically be chosen instead.
>
>     >
>     > But it has nothing to do with connecting the server
locally with
>     "psql
>     > -d postgres -U postgres -h localhost"
>
>     The error:
>
>     no pg_hba.conf entry for host "::1", user "postgres", database
>     "postgres"
>
>
>     says it does and the error is correct as you do not have an IPv6
>     entry
>     for localhost in pg_hba.conf. At least in the snippet you
showed us.
>
>
>     >
>     > But when I try to connect it locally I get this error. So
it is
>     related
>
>     When you say connect locally do you mean to localhost or to
>     local(socket)?
>
>     > to local connections only and when I pass the hostname or
ip of the
>     > server it works fine without any issue.
>     >
>     >
>     > Regards.
>     >
>
>     --
>     Adrian Klaver
> adrian.kla...@aklaver.com
>

-- 
Andreas Kretschmer - currently still (garden leave)

Technical Account Manager (TAM)
www.enterprisedb.com 




Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Atul Kumar
Can you please share a command to check whether the IP is resolved
to IPv6?

On Wed, Nov 22, 2023 at 11:25 PM Andreas Kretschmer 
wrote:

>
>
> On 22.11.23 at 18:44, Atul Kumar wrote:
> > I am giving this command
> > psql -d postgres -U postgres -p 5432 -h localhost
> > Then only I get that error.
>
> so localhost resolved to an IPv6 address ...
>
> >
> > but when I  pass ip or hostname of the local server then I don't get
> > such error message
> > 1. psql -d postgres -U postgres -p 5432 -h 
> > 2. psql -d postgres -U postgres -p 5432 -h 
>
> resolves to an IPv4 address. Can you see the difference?
>
> localhost != ipv4-address != hostname with ipv4 address
>
> Andreas
>
> >
> >
> > I don't get that error while using the above two commands.
> >
> >
> > Regards.
> >
> >
> > On Wed, Nov 22, 2023 at 10:45 PM Adrian Klaver
> >  wrote:
> >
> > On 11/22/23 09:03, Atul Kumar wrote:
> > > The entries that I changed were to replace the md5 with
> > scram-sha-256
> > > and remove unnecessary remote IPs.
> >
> > FYI from:
> >
> > https://www.postgresql.org/docs/current/auth-password.html
> >
> > md5
> >
> >  The method md5 uses a custom less secure challenge-response
> > mechanism. It prevents password sniffing and avoids storing
> > passwords on
> > the server in plain text but provides no protection if an attacker
> > manages to steal the password hash from the server. Also, the MD5
> > hash
> > algorithm is nowadays no longer considered secure against determined
> > attacks.
> >
> >  The md5 method cannot be used with the db_user_namespace
> feature.
> >
> >  To ease transition from the md5 method to the newer SCRAM
> > method,
> > if md5 is specified as a method in pg_hba.conf but the user's
> > password
> > on the server is encrypted for SCRAM (see below), then SCRAM-based
> > authentication will automatically be chosen instead.
> >
> > >
> > > But it has nothing to do with connecting the server locally with
> > "psql
> > > -d postgres -U postgres -h localhost"
> >
> > The error:
> >
> > no pg_hba.conf entry for host "::1", user "postgres", database
> > "postgres"
> >
> >
> > says it does and the error is correct as you do not have an IPv6
> > entry
> > for localhost in pg_hba.conf. At least in the snippet you showed us.
> >
> >
> > >
> > > But when I try to connect it locally I get this error. So it is
> > related
> >
> > When you say connect locally do you mean to localhost or to
> > local(socket)?
> >
> > > to local connections only and when I pass the hostname or ip of the
> > > server it works fine without any issue.
> > >
> > >
> > > Regards.
> > >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com
> >
>
> --
> Andreas Kretschmer - currently still (garden leave)
> Technical Account Manager (TAM)
> www.enterprisedb.com
>
>
>
>


Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Adrian Klaver


On 11/22/23 9:55 AM, Andreas Kretschmer wrote:



On 22.11.23 at 18:44, Atul Kumar wrote:

I am giving this command
psql -d postgres -U postgres -p 5432 -h localhost
Then only I get that error.


so localhost resolved to an IPv6 address ...



Yeah, you should take a look at:

/etc/hosts


In the meantime include a line for IPv6 in pg_hba.conf, where the address
would be:


::1/128



Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Andreas Kretschmer




On 22.11.23 at 18:44, Atul Kumar wrote:

I am giving this command
psql -d postgres -U postgres -p 5432 -h localhost
Then only I get that error.


so localhost resolved to an IPv6 address ...



but when I  pass ip or hostname of the local server then I don't get 
such error message

1. psql -d postgres -U postgres -p 5432 -h 
2. psql -d postgres -U postgres -p 5432 -h 


resolves to an IPv4 address. Can you see the difference?

localhost != ipv4-address != hostname with ipv4 address

Andreas




I don't get that error while using the above two commands.


Regards.


On Wed, Nov 22, 2023 at 10:45 PM Adrian Klaver 
 wrote:


On 11/22/23 09:03, Atul Kumar wrote:
> The entries that I changed were to replace the md5 with
scram-sha-256
> and remove unnecessary remote IPs.

FYI from:

https://www.postgresql.org/docs/current/auth-password.html

md5

     The method md5 uses a custom less secure challenge-response
mechanism. It prevents password sniffing and avoids storing
passwords on
the server in plain text but provides no protection if an attacker
manages to steal the password hash from the server. Also, the MD5
hash
algorithm is nowadays no longer considered secure against determined
attacks.

     The md5 method cannot be used with the db_user_namespace feature.

     To ease transition from the md5 method to the newer SCRAM
method,
if md5 is specified as a method in pg_hba.conf but the user's
password
on the server is encrypted for SCRAM (see below), then SCRAM-based
authentication will automatically be chosen instead.

>
> But it has nothing to do with connecting the server locally with
"psql
> -d postgres -U postgres -h localhost"

The error:

no pg_hba.conf entry for host "::1", user "postgres", database
"postgres"


says it does and the error is correct as you do not have an IPv6
entry
for localhost in pg_hba.conf. At least in the snippet you showed us.


>
> But when I try to connect it locally I get this error. So it is
related

When you say connect locally do you mean to localhost or to
local(socket)?

> to local connections only and when I pass the hostname or ip of the
> server it works fine without any issue.
>
>
> Regards.
>

-- 
Adrian Klaver

adrian.kla...@aklaver.com



--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Atul Kumar
I am giving this command
psql -d postgres -U postgres -p 5432 -h localhost
Then only I get that error.

but when I  pass ip or hostname of the local server then I don't get such
error message
1. psql -d postgres -U postgres -p 5432 -h 
2. psql -d postgres -U postgres -p 5432 -h 


I don't get that error while using the above two commands.


Regards.


On Wed, Nov 22, 2023 at 10:45 PM Adrian Klaver 
wrote:

> On 11/22/23 09:03, Atul Kumar wrote:
> > The entries that I changed were to replace the md5 with scram-sha-256
> > and remove unnecessary remote IPs.
>
> FYI from:
>
> https://www.postgresql.org/docs/current/auth-password.html
>
> md5
>
>  The method md5 uses a custom less secure challenge-response
> mechanism. It prevents password sniffing and avoids storing passwords on
> the server in plain text but provides no protection if an attacker
> manages to steal the password hash from the server. Also, the MD5 hash
> algorithm is nowadays no longer considered secure against determined
> attacks.
>
>  The md5 method cannot be used with the db_user_namespace feature.
>
>  To ease transition from the md5 method to the newer SCRAM method,
> if md5 is specified as a method in pg_hba.conf but the user's password
> on the server is encrypted for SCRAM (see below), then SCRAM-based
> authentication will automatically be chosen instead.
>
> >
> > But it has nothing to do with connecting the server locally with "psql
> > -d postgres -U postgres -h localhost"
>
> The error:
>
> no pg_hba.conf entry for host "::1", user "postgres", database "postgres"
>
>
> says it does and the error is correct as you do not have an IPv6 entry
> for localhost in pg_hba.conf. At least in the snippet you showed us.
>
>
> >
> > But when I try to connect it locally I get this error. So it is related
>
> When you say connect locally do you mean to localhost or to local(socket)?
>
> > to local connections only and when I pass the hostname or ip of the
> > server it works fine without any issue.
> >
> >
> > Regards.
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Removing oids with pg_repack

2023-11-22 Thread Achilleas Mantzios

On 22/11/23 15:14, CG wrote:



On Wednesday, November 22, 2023 at 01:20:18 AM EST, Achilleas Mantzios 
 wrote:



On 21/11/23 20:41, CG wrote:
I have a very large PostgreSQL 9.5 database that still has very large 
tables with oids. I'm trying to get rid of the oids with as little 
downtime as possible so I can prep the database for upgrade past 
PostgreSQL 11. I had a wild idea to mod pg_repack to write a new table 
without oids. I think it almost works.


To test out my idea I made a new table wipe_oid_test with oids. I 
filled it with a few rows of data.



But PostgreSQL still thinks that the table has oids:

mydata=# \d+ wipe_oid_test
                   Table "public.wipe_oid_test"
 Column | Type | Modifiers | Storage  | Stats target | Description
+--+---+--+--+-
 k      | text | not null  | extended |           |
 v      | text |           | extended |           |
Indexes:
    "wipe_oid_test_pkey" PRIMARY KEY, btree (k)
Has OIDs: yes
Except where does it mention in the pg_repack docs (or source) that it 
is meant to be used for NO OIDS conversion?


It does not-- I was trying to leverage and tweak the base 
functionality of pg_repack which sets up triggers and migrates data. I 
figured if the target table was created without OIDs that when 
pg_repack did the "swap" operation that the new table would take over 
with the added bonus of not having oids.


I can modify pg_class and set relhasoids = false, but it isn't 
actually eliminating the oid column. `\d+` will report not report 
that it has oids, but the oid column is still present and returns the 
same result before updating pg_class.



Just Don't!

Noted. ;)

So I'm definitely missing something. I really need a point in the
right direction. Please help! ;)



There are a few methods to get rid of OIDs:

- ALTER TABLE .. SET WITHOUT OIDS (just mentioning, you already 
checked that)


This makes the database unusable for hours and hours and hours because 
it locks the table entirely while it performs the operation. That's 
just something that we can't afford.


- Use table copy + use of a trigger to log changes : 
https://dba.stackexchange.com/questions/259359/eliminating-oids-while-upgrading-postgresql-from-9-4-to-12


That SO is not quite the effect I'm going for. The poster of that SO 
was using OIDS in their application and needed a solution to maintain 
those values after conversion. I simply want to eliminate them without 
the extraordinary downtime the database would experience during ALTER 
operations.
Sorry, I meant this one:
https://dba.stackexchange.com/questions/286453/stripping-oids-from-tables-in-preparation-for-pg-upgrade


- Use of Inheritance (the most neat solution I have seen, this is what 
I used for a 2TB table conversion) : 
https://www.percona.com/blog/performing-etl-using-inheritance-in-postgresql/ 
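As a rough illustration of that inheritance-based approach — a sketch only, with a hypothetical table `t` keyed by `id`; batch size, locking, and foreign keys all need real-world care:

```sql
-- Create a no-OIDs clone and attach it as a child, so reads on t see both.
CREATE TABLE t_new (LIKE t INCLUDING ALL) WITH (oids = false);
ALTER TABLE t_new INHERIT t;

-- Move rows in bounded batches (repeat until SELECT count(*) FROM ONLY t = 0).
WITH moved AS (
    DELETE FROM ONLY t
    WHERE id IN (SELECT id FROM ONLY t LIMIT 10000)
    RETURNING *
)
INSERT INTO t_new SELECT * FROM moved;

-- Swap names once the parent is empty.
BEGIN;
ALTER TABLE t_new NO INHERIT t;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
```

Note that new writes still land in the parent unless redirected, e.g. with a trigger, which is the part the Percona post covers in detail.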



This is closest to the effect I was going for. pg_repack essentially 
creates a second table and fills it with the data from the first table 
while ensuring standard db operations against that table continue to 
function while the data is being moved from the old table to the new 
table. The process outlined in the Percona ETL strategy has to be 
repeated per-table, which is work I was hoping to avoid by leveraging 
95% of the functionality of pg_repack while supplying my own 5% as the 
resulting table would not have oids regardless of the source table's 
configuration.


For my experiment, Table A did have oids. Table B (created by 
pg_repack) did not (at least at creation). When the "swap" operation 
happened in pg_repack, the metadata for Table A was assigned to Table 
B. I'm just trying to figure out what metadata I need to change in the 
system tables to reflect the actual table structure.


I have the fallback position for the Percona ETL strategy. But I feel 
like I'm REALLY close with pg_repack and I just don't understand 
enough about the system internals to nudge it to correctness and need 
some expert assistance to tap it in the hole.

Why not just inspect the pg_repack code?




CG





--
Achilleas Mantzios
  IT DEV - HEAD
  IT DEPT
  Dynacom Tankers Mgmt


--
Achilleas Mantzios
 IT DEV - HEAD
 IT DEPT
 Dynacom Tankers Mgmt


Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Adrian Klaver

On 11/22/23 09:03, Atul Kumar wrote:
The entries that I changed were to replace the md5 with scram-sha-256 
and remove unnecessary remote IPs.


FYI from:

https://www.postgresql.org/docs/current/auth-password.html

md5

The method md5 uses a custom less secure challenge-response 
mechanism. It prevents password sniffing and avoids storing passwords on 
the server in plain text but provides no protection if an attacker 
manages to steal the password hash from the server. Also, the MD5 hash 
algorithm is nowadays no longer considered secure against determined 
attacks.


The md5 method cannot be used with the db_user_namespace feature.

To ease transition from the md5 method to the newer SCRAM method, 
if md5 is specified as a method in pg_hba.conf but the user's password 
on the server is encrypted for SCRAM (see below), then SCRAM-based 
authentication will automatically be chosen instead.
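A quick way to see which stored format each role actually has before flipping pg_hba.conf — a sketch that requires superuser, since pg_authid is restricted:

```sql
SELECT rolname,
       CASE WHEN rolpassword LIKE 'SCRAM-SHA-256$%' THEN 'scram-sha-256'
            WHEN rolpassword LIKE 'md5%'            THEN 'md5'
            WHEN rolpassword IS NULL                THEN 'no password'
            ELSE 'other'
       END AS stored_as
FROM pg_authid
ORDER BY rolname;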




But it has nothing to do with connecting the server locally with "psql 
-d postgres -U postgres -h localhost"


The error:

no pg_hba.conf entry for host "::1", user "postgres", database "postgres


says it does and the error is correct as you do not have an IPv6 entry 
for localhost in pg_hba.conf. At least in the snippet you showed us.





But when I try to connect it locally I get this error. So it is related 


When you say connect locally do you mean to localhost or to local(socket)?

to local connections only and when I pass the hostname or ip of the 
server it works fine without any issue.



Regards.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Ron Johnson
The error message is EXPLICIT, and DOES NOT LIE.  Either someone removed
the ::1 entry, or you're now using IPv6.

On Wed, Nov 22, 2023 at 12:03 PM Atul Kumar  wrote:

> The entries that I changed were to replace the md5 with scram-sha-256 and
> remove unnecessary remote IPs.
>
> But it has nothing to do with connecting the server locally with "psql -d
> postgres -U postgres -h localhost"
>
> But when I try to connect it locally I get this error. So it is related to
> local connections only and when I pass the hostname or ip of the server it
> works fine without any issue.
>
>

 The entry of pg_hba.conf is like below:

 # TYPE  DATABASEUSERADDRESS METHOD



 # "local" is for Unix domain socket connections only

 local   all   all   scram-sha-256

 # IPv4 local connections:

 host    all   postgres 127.0.0.1/32   scram-sha-256



 What I am missing here, please suggest.

>>>
>>> A definition for host "::1", user "postgres", database "postgres".
>>> It's right there in the error message.
>>>
>>


Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Atul Kumar
The entries that I changed were to replace the md5 with scram-sha-256 and
remove unnecessary remote IPs.

But it has nothing to do with connecting the server locally with "psql -d
postgres -U postgres -h localhost"

But when I try to connect it locally I get this error. So it is related to
local connections only and when I pass the hostname or ip of the server it
works fine without any issue.


Regards.


On Wed, Nov 22, 2023 at 10:31 PM Atul Kumar  wrote:

> The entries that I changed were to replace the md5 with scram-sha-256 and
> remove unnecessary remote IPs.
>
> But it has nothing to do with connecting the server locally with "psql -d
> postgres -U postgres -h localhost"
>
> But when I try to connect it locally I get this error. So it is related to
> local connections only and when I pass the hostname or ip of the server it
> works fine without any issue.
>
>
> Regards.
>
>
> On Wed, Nov 22, 2023 at 9:55 PM Ron Johnson 
> wrote:
>
>> On Wed, Nov 22, 2023 at 11:22 AM Atul Kumar 
>> wrote:
>>
>>> Hi,
>>>
>>> I have postgres 12 running in centos 7, recently I changed the
>>> authentication of entries of pg_hba.conf to scram-sha-256 for localhost.
>>>
>>>
>> I think you changed something else, at the same time.
>>
>>
>>> Since then I have started getting the below error:
>>>
>>> no pg_hba.conf entry for host "::1", user "postgres", database "postgres"
>>>
>>>
>>>
>>>
>>> The entry of pg_hba.conf is like below:
>>>
>>> # TYPE  DATABASEUSERADDRESS METHOD
>>>
>>>
>>>
>>> # "local" is for Unix domain socket connections only
>>>
>>> local   all   all   scram-sha-256
>>>
>>> # IPv4 local connections:
>>>
>>> host    all   postgres 127.0.0.1/32   scram-sha-256
>>>
>>>
>>>
>>> What I am missing here, please suggest.
>>>
>>
>> A definition for host "::1", user "postgres", database "postgres".  It's
>> right there in the error message.
>>
>


Re: pg_restore enhancements

2023-11-22 Thread Efrain J. Berdecia
Thanks. The issue we've run into, which I guess could really be a setup issue,
with running a COPY command while executing pg_restore, is that if we are
restoring a large table (bigger than 500GB) our WAL directory can grow to be
very large.
I would think that if the pg_restore or COPY command was able to support a
batch-size option, this would allow Postgres to either archive or remove WAL
files and prevent having to re-size the WAL directory for a one-time refresh
operation.
I'm trying to gauge how feasible it would be to start looking at contributing to
add such a feature to either the COPY command or pg_restore.
Thanks,
Efrain J. Berdecia

On Wednesday, November 22, 2023 at 11:37:13 AM EST, Adrian Klaver 
 wrote:  
 
 On 11/22/23 05:25, Efrain J. Berdecia wrote:
> After working for a site where we are constantly doing logical pg_dump 
> to refresh environments I've come to miss features available in other 
> RDBMS' refresh/restore utilities.
> 
> Someone could point me in the right direction otherwise, but pg_restore 
> seems to be lacking the ability to resume a restore upon failure; it's all
> or nothing with this guy. There also doesn't seem to be a way to control 
> batch size when doing the COPY phase, therefore preventing the WAL 
> directory from filling up and crashing the system.

The above needs more information on Postgres version(community or fork), 
OS and version, the size of the data set, the storage type and size, the 
Postgres conf, etc. Restores are being done all the time and this is the 
first report, as far as I can remember,  about an issue with COPY and 
WAL in a restore.


pg_restore
https://www.postgresql.org/docs/current/app-pgrestore.html

Does have:

--section=sectionname

    Only restore the named section. The section name can be pre-data, 
data, or post-data. This option can be specified more than once to 
select multiple sections. The default is to restore all sections.

    The data section contains actual table data as well as large-object 
definitions. Post-data items consist of definitions of indexes, 
triggers, rules and constraints other than validated check constraints. 
Pre-data items consist of all other data definition items.


AND

-l
--list

    List the table of contents of the archive. The output of this 
operation can be used as input to the -L option. Note that if filtering 
switches such as -n or -t are used with -l, they will restrict the items 
listed.


-L list-file
--use-list=list-file

    Restore only those archive elements that are listed in list-file, 
and restore them in the order they appear in the file. Note that if 
filtering switches such as -n or -t are used with -L, they will further 
restrict the items restored.

    list-file is normally created by editing the output of a previous 
-l operation. Lines can be moved or removed, and can also be commented 
out by placing a semicolon (;) at the start of the line. See below for 
examples.
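The -l/-L pair is also the closest thing pg_restore offers to a manual "resume": dump the TOC, comment out what has already restored, and feed the remainder back with -L. A small sketch of that edit step — the TOC lines here are hypothetical stand-ins for what `pg_restore -l` emits; a leading semicolon disables an entry:

```python
# Hypothetical TOC entries; real ones come from `pg_restore -l dump.file`.
toc = [
    "213; 1259 16385 TABLE DATA public accounts owner",
    "214; 1259 16390 TABLE DATA public orders owner",
]
already_restored = {"accounts"}

# Prefix already-restored items with ';' so `pg_restore -L` skips them.
filtered = [
    ";" + line if any(f" {name} " in line for name in already_restored) else line
    for line in toc
]
print(filtered)
```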


>
> IMHO, it would be nice to have a feature that would allow pg_restore to 
> resume based on which part of the restore and/or object have already 
> been restored.
> 
> When it comes to the COPY phase of the restore, it would be nice to be 
> able to control batch size and resume COPY of a particular object upon 
> failure.

COPY as it stands now is all or none, so that command would have to be 
changed.

> 
> Thanks in advance for any suggestions or the green light to post this to 
> the PG-developer group :-)
> 
> Thanks,
> Efrain J. Berdecia

-- 
Adrian Klaver
adrian.kla...@aklaver.com

  

Re: pg_restore enhancements

2023-11-22 Thread Adrian Klaver

On 11/22/23 05:25, Efrain J. Berdecia wrote:
After working for a site where we are constantly doing logical pg_dump 
to refresh environments I've come to miss features available in other 
RDBMS' refresh/restore utilities.


Someone could point me in the right direction otherwise, but pg_restore 
seems to be lacking the ability to resume a restore upon failure; it's all
or nothing with this guy. There also doesn't seem to be a way to control 
batch size when doing the COPY phase, therefore preventing the WAL 
directory from filling up and crashing the system.


The above needs more information on Postgres version(community or fork), 
OS and version, the size of the data set, the storage type and size, the 
Postgres conf, etc. Restores are being done all the time and this is the 
first report, as far as I can remember,  about an issue with COPY and 
WAL in a restore.



pg_restore
https://www.postgresql.org/docs/current/app-pgrestore.html

Does have:

--section=sectionname

Only restore the named section. The section name can be pre-data, 
data, or post-data. This option can be specified more than once to 
select multiple sections. The default is to restore all sections.


The data section contains actual table data as well as large-object 
definitions. Post-data items consist of definitions of indexes, 
triggers, rules and constraints other than validated check constraints. 
Pre-data items consist of all other data definition items.



AND

-l
--list

List the table of contents of the archive. The output of this 
operation can be used as input to the -L option. Note that if filtering 
switches such as -n or -t are used with -l, they will restrict the items 
listed.



-L list-file
--use-list=list-file

Restore only those archive elements that are listed in list-file, 
and restore them in the order they appear in the file. Note that if 
filtering switches such as -n or -t are used with -L, they will further 
restrict the items restored.


list-file is normally created by editing the output of a previous 
-l operation. Lines can be moved or removed, and can also be commented 
out by placing a semicolon (;) at the start of the line. See below for 
examples.





IMHO, it would be nice to have a feature that would allow pg_restore to 
resume based on which part of the restore and/or object have already 
been restored.


When it comes to the COPY phase of the restore, it would be nice to be 
able to control batch size and resume COPY of a particular object upon 
failure.


COPY as it stands now is all or none, so that command would have to be 
changed.




Thanks in advance for any suggestions or the green light to post this to 
the PG-developer group :-)


Thanks,
Efrain J. Berdecia


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 11:22 AM Atul Kumar  wrote:

> Hi,
>
> I have postgres 12 running in centos 7, recently I changed the
> authentication of entries of pg_hba.conf to scram-sha-256 for localhost.
>
>
I think you changed something else, at the same time.


> Since then I have started getting the below error:
>
> no pg_hba.conf entry for host "::1", user "postgres", database "postgres"
>
>
>
>
> The entry of pg_hba.conf is like below:
>
> # TYPE  DATABASEUSERADDRESS METHOD
>
>
>
> # "local" is for Unix domain socket connections only
>
> local   all   all   scram-sha-256
>
> # IPv4 local connections:
>
> host    all   postgres 127.0.0.1/32   scram-sha-256
>
>
>
> What I am missing here, please suggest.
>

A definition for host "::1", user "postgres", database "postgres".  It's
right there in the error message.


Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Adrian Klaver

On 11/22/23 08:21, Atul Kumar wrote:

Hi,

I have postgres 12 running in centos 7, recently I changed the 
authentication of entries of pg_hba.conf to scram-sh-256 for localhost.


Since then I have started getting the below error:

no pg_hba.conf entry for host "::1", user "postgres", database "postgres"


The host is ::1, which is IPv6, and your pg_hba.conf entry below is for IPv4.
You need to add an IPv6 line.







The entry of pg_hba.conf is like below:

# TYPE  DATABASE USER ADDRESS METHOD

# "local" is for Unix domain socket connections only

local all all                                          scram-sha-256

# IPv4 local connections:

host all postgres 127.0.0.1/32  scram-sha-256



What I am missing here, please suggest.




Regards,

Atul



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Andreas Kretschmer




On 22.11.23 at 17:21, Atul Kumar wrote:



Since then I have started getting the below error:

no pg_hba.conf entry for host "::1", user "postgres", database "postgres"




What I am missing here, please suggest.




that sounds like an issue with IPv6. Do you use it? Disable it or add
an entry for it.



Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: General support on postgres replication

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 11:17 AM Vijaykumar Patil <
vijaykumar.pa...@maersk.com> wrote:

> Hi Team,
>
>
>
> Need some support for below issue .
>
>
>
> I have created streaming replication with two nodes .
>
>
>
> One is the primary and the second is a standby, but after any DML or DDL
> operation on the primary, the walreceiver process is terminated on the
> standby and the standby database opens in read-write mode.
>
>
>
> Below are the error details .
>
>
>
> 2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  recovery
> stopping before commit of transaction 25627, time 2023-11-22
> 03:39:58.013764-05
>
> 2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  redo done
> at 1/260059E0 system usage: CPU: user: 0.02 s, system: 0.04 s, elapsed:
> 5197.90 s
>
> 2023-11-22 03:39:58 EST [1276846]: user=,db=,app=,client=FATAL:
> terminating walreceiver process due to administrator command
>
> ERROR: [037]: archive-get command requires option: pg1-path
>
>HINT: does this stanza exist?
>
> 2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  selected
> new timeline ID: 37
>
> ERROR: [037]: archive-get command requires option: pg1-path
>
>HINT: does this stanza exist?
>

What have you done about fixing that quite explicit ERROR message, and its
attendant HINT?
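For what it's worth, that hint usually means the standby's pgbackrest.conf lacks the stanza named in restore_command. A minimal sketch of what might be missing (the stanza name and paths here are assumptions, not taken from the original post):

```
# /etc/pgbackrest/pgbackrest.conf on the standby
[mycluster]
pg1-path=/var/lib/pgsql/data

[global]
repo1-host=backup-server
```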


strange behavior of pg_hba.conf file

2023-11-22 Thread Atul Kumar
Hi,

I have Postgres 12 running on CentOS 7; recently I changed the
authentication method in the pg_hba.conf entries to scram-sha-256 for localhost.

Since then I have started getting the below error:

no pg_hba.conf entry for host "::1", user "postgres", database "postgres"




The entry of pg_hba.conf is like below:

# TYPE  DATABASE  USER  ADDRESS  METHOD



# "local" is for Unix domain socket connections only

local   all   all                     scram-sha-256

# IPv4 local connections:

host    all   postgres  127.0.0.1/32   scram-sha-256



What am I missing here? Please suggest.




Regards,

Atul


Re: Feature request: pg_get_tabledef(text)

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 11:09 AM Laurenz Albe 
wrote:

> On Wed, 2023-11-22 at 16:41 +0100, Hans Schou wrote:
> > Similar to pg_get_viewdef() and pg_get_functiondef() it would be useful
> with a pg_get_tabledef() to get a full description of how a table is
> defined.
>
> This has been requested before:
>
> https://www.postgresql.org/message-id/flat/CAFEN2wxsDSSuOvrU03CE33ZphVLqtyh9viPp6huODCDx2UQkYA%40mail.gmail.com
>
> One of the problems is what should be included.
> Indexes?  Policies?  Constraints?
>
> Another problem is that while a function or view definition is a single
> SQL statement, a table definition could consist of more than a single
> statement, depending on the answer to the previous question.
>
> No insurmountable questions, but someone would have to come up with a
> clear design and implement it.
>

Because you can already get all the other DDL relevant to creating a table
(like indices, primary and foreign keys, grants), ISTM that just a plain
CREATE TABLE with column DEFAULT clauses is what pg_get_tabledef() should
create.

A comprehensive "recreate everything related to a table" function should be
left as an exercise for the DBA.


Re: Feature request: pg_get_tabledef(text)

2023-11-22 Thread Erik Wienhold
On 2023-11-22 16:41 +0100, Hans Schou wrote:
> Similar to pg_get_viewdef() and pg_get_functiondef() it would be useful
> with a pg_get_tabledef() to get a full description of how a table is
> defined.

There's already a discussion on that topic[1].  But I don't know about
the current state of development.

[1] 
https://www.postgresql.org/message-id/CAFEN2wxsDSSuOvrU03CE33ZphVLqtyh9viPp6huODCDx2UQkYA%40mail.gmail.com

-- 
Erik




General support on postgres replication

2023-11-22 Thread Vijaykumar Patil
Hi Team,

I need some support with the issue below.

I have created streaming replication with two nodes.

One is the primary and the second is a standby, but after any DML or DDL
operation on the primary, the walreceiver process is terminated on the standby
and the standby database opens in read-write mode.

Below are the error details.

2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  recovery 
stopping before commit of transaction 25627, time 2023-11-22 03:39:58.013764-05
2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  redo done at 
1/260059E0 system usage: CPU: user: 0.02 s, system: 0.04 s, elapsed: 5197.90 s
2023-11-22 03:39:58 EST [1276846]: user=,db=,app=,client=FATAL:  terminating 
walreceiver process due to administrator command
ERROR: [037]: archive-get command requires option: pg1-path
   HINT: does this stanza exist?
2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  selected new 
timeline ID: 37
ERROR: [037]: archive-get command requires option: pg1-path
   HINT: does this stanza exist?
2023-11-22 03:39:58 EST [1275942]: user=,db=,app=,client=LOG:  archive recovery 
complete
2023-11-22 03:39:58 EST [1275940]: user=,db=,app=,client=LOG:  checkpoint 
starting: end-of-recovery immediate wait
2023-11-22 03:39:58 EST [1275940]: user=,db=,app=,client=LOG:  checkpoint 
complete: wrote 6 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; 
write=0.022 s, sync=0.003 s, total=0.033 s; sync files=6, longest=0.002 s, 
average=0.001 s; distance=22 kB, estimate=34772 kB
2023-11-22 03:39:58 EST [1275938]: user=,db=,app=,client=LOG:  database system 
is ready to accept connections

Thanks & Regards
Vijaykumar
Database Operations
Maersk Global Service Centre, Pune.






Re: LibPQ: PQresultMemorySize as proxy to transfered bytes

2023-11-22 Thread Alvaro Herrera
On 2023-Jun-28, Dominique Devienne wrote:

> And if there's a better proxy to programmatically know the network
> traffic exchanged on the connection's socket, that's cross-platform?
> Obviously libpq itself knows, but I don't see any way to access that info.
> 
> Perhaps tracing might? But will that incur overhead?

Maybe you can use PQtrace()[1], yeah.  It will indicate, for each message
transmitted, its size in bytes.  Naturally there will be some overhead
in writing the trace file.  Also, the format of the output file has some
issues[2] that may make it difficult to machine-parse.  But it might be
good enough for you ... or you might find yourself motivated to fix
these problems.
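A minimal sketch of wiring that up in C (the connection string and output path are placeholders; it needs a reachable server, and error handling is kept minimal):

```c
/*
 * Sketch: attach a trace stream to a libpq connection so each protocol
 * message is logged along with its length in bytes.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    FILE *trace = fopen("/tmp/pq_trace.txt", "w");
    PQtrace(conn, trace);            /* every message logged with its size */

    PGresult *res = PQexec(conn, "SELECT 1");
    PQclear(res);

    PQuntrace(conn);                 /* detach before closing the stream */
    fclose(trace);
    PQfinish(conn);
    return 0;
}
```

Compile with something like `cc -o trace_demo trace_demo.c -lpq` (you may need the include/lib paths from pg_config).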

[1] https://www.postgresql.org/docs/current/libpq-control.html#LIBPQ-PQTRACE
[2] 
https://www.postgr.es/m/CAFCRh-8OPoe%3D0j9v4wth7qU-x4jvjVc8DoOLV66qdnC9Do0ymw%40mail.gmail.com

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
"Doing what he did amounts to sticking his fingers under the hood of the
implementation; if he gets his fingers burnt, it's his problem."  (Tom Lane)




Configuration knobs & dials to speed up query optimization

2023-11-22 Thread Ron Johnson
Pg 9.6.24, which will change by April, but not now.

We've got some huge (2200 line long) queries that are many UNIONs
of complicated queries hitting inheritance-partitioned tables.  They can't
be refactored immediately, and maybe not at all (complicated applications
hitting normalized databases make for complicated queries).

BIND (and EXPLAIN, when I extract them from the log file and run them
myself) takes upwards of 25 seconds.  It's from JDBC connections, if that
matters.

Is there any way for me to speed that up?

The Linux system has 128GB RAM, 92% of it being "cached", according to
top(1).

I've read https://www.postgresql.org/docs/9.6/runtime-config-query.html but
can't go mucking around with big sticks on a very busy system with lots of
concurrent users.
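For the record, I do know those knobs can be scoped to a single session rather than changed server-wide; a sketch of what I could try without touching the busy system (values purely illustrative):

```
BEGIN;
-- Follow the FROM/JOIN order as written instead of searching join orders;
-- this cuts planning time at the risk of a worse plan.
SET LOCAL from_collapse_limit = 1;
SET LOCAL join_collapse_limit = 1;
EXPLAIN <the 2200-line query>;
COMMIT;
```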

Here are the only non-default config values which I can think of that are
relevant to the question at hand:
shared_buffers = 16GB
work_mem = 300MB
maintenance_work_mem = 12GB
effective_cache_size = 96GB
default_statistics_target = 200

Thanks


Re: Feature request: pg_get_tabledef(text)

2023-11-22 Thread Laurenz Albe
On Wed, 2023-11-22 at 16:41 +0100, Hans Schou wrote:
> Similar to pg_get_viewdef() and pg_get_functiondef() it would be useful with 
> a pg_get_tabledef() to get a full description of how a table is defined.

This has been requested before:
https://www.postgresql.org/message-id/flat/CAFEN2wxsDSSuOvrU03CE33ZphVLqtyh9viPp6huODCDx2UQkYA%40mail.gmail.com

One of the problems is what should be included.
Indexes?  Policies?  Constraints?

Another problem is that while a function or view definition is a single
SQL statement, a table definition could consist of more than a single
statement, depending on the answer to the previous question.

No insurmountable questions, but someone would have to come up with a
clear design and implement it.

Yours,
Laurenz Albe




Re: Feature request: pg_get_tabledef(text)

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 10:41 AM Hans Schou  wrote:

> Hi
>
> Similar to pg_get_viewdef() and pg_get_functiondef() it would be useful
> with a pg_get_tabledef() to get a full description of how a table is
> defined.
>

Because there's already pg_get_viewdef(),
pg_get_functiondef(), pg_get_constraintdef(), pg_indexes.indexdef and ways
to get all GRANT and REVOKE privs, any pg_get_tabledef() would just be the
CREATE TABLE statement without PRIMARY KEY or FOREIGN KEY clauses.


Feature request: pg_get_tabledef(text)

2023-11-22 Thread Hans Schou
Hi

Similar to pg_get_viewdef() and pg_get_functiondef() it would be useful
with a pg_get_tabledef() to get a full description of how a table is
defined.

Currently the table definition can be extracted with the command:

  pg_dump -d foo --schema-only --table=bar | egrep '^[^-]'

The psql command '\d bar' gives some of the same information but it is not
in a format where it can be used to create a table.
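In the meantime, a rough sketch of the column-list part can be assembled from the catalogs (columns, types, and NOT NULL only; defaults, constraints, and indexes are deliberately omitted here):

```
SELECT format('CREATE TABLE %s (%s);',
              c.oid::regclass,
              string_agg(format('%I %s%s',
                                a.attname,
                                format_type(a.atttypid, a.atttypmod),
                                CASE WHEN a.attnotnull THEN ' NOT NULL'
                                     ELSE '' END),
                         ', ' ORDER BY a.attnum))
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relname = 'bar'
  AND a.attnum > 0
  AND NOT a.attisdropped
GROUP BY c.oid;
```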

Extra:
With the pg_get_tabledef() function in place it is very close to being
possible to implement pg_dump() within the system. So instead of running:
  pg_dump -d foo
one could just run:
  psql -d foo -c 'SELECT pg_dump()'

The function could also be called from within a programming language like
Java/PHP.

pg_dump has a lot of options where some of them could be parameters to the
pg_dump() function. If using a cloud or other webservice this will be an
easy way to make an extra backup.

-- 
𝕳𝖆𝖓𝖘 𝕾𝖈𝖍𝖔𝖚
☏ ➁➁ ➅➃ ➇⓪ ➁⓪


Re: Removing oids with pg_repack

2023-11-22 Thread Tom Lane
CG  writes:
> I have the fallback position for the Percona ETL strategy. But I feel like 
> I'm REALLY close with pg_repack and I just don't understand enough about the 
> system internals to nudge it to correctness and need some expert assistance 
> to tap it in the hole.

The only "system metadata" that would need changing is
pg_class.relhasoids.  However, it's certainly necessary to rewrite
all the physical table rows (and hence all the table's indexes).

I don't know a lot about pg_repack, but I'm a bit skeptical that
it can really make that transparent.  I'm even more skeptical that
it could almost support it, except that this obvious use-case never
occurred to the developers.  I suspect that either the support
already exists but you missed it, or else there's some fundamental
stumbling block.  Maybe go discuss this with the pg_repack
developers?  (Maybe they read pgsql-general, but I wouldn't count
on it.)

regards, tom lane




Re: Removing oids with pg_repack

2023-11-22 Thread Ron Johnson
On Tue, Nov 21, 2023 at 1:43 PM CG  wrote:

> I have a very large PostgreSQL 9.5 database that still has very large
> tables with oids. I'm trying to get rid of the oids with as little downtime
> as possible so I can prep the database for upgrade past PostgreSQL 11. I
> had a wild idea to mod pg_repack to write a new table without oids. I think
> it almost works.
>

Too bad that pg_dump doesn't have an --exclude-oids option.


Re: Connection fails on one system in a address range allowed to connect

2023-11-22 Thread Tom Lane
Laurenz Albe  writes:
> On Tue, 2023-11-21 at 23:27 +, Johnson, Bruce E - (bjohnson) wrote:
>> DBI 
>> connect('dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432','trav',...)
>>  failed: FATAL:  password authentication failed for user "trav"
>> FATAL:  no pg_hba.conf entry for host "150.135.124.50", user "trav", 
>> database "webdata", no encryption at ./pg_test.pl line 8.
>> 
>> The pg_hba.conf on the server includes this which should encompass all 
>> systems in this VLAN
>> # external 'OldMTM' site range
>> hostssl all all 150.135.124.0/25 password

> "no encryption" does not match a "hostssl" entry.

Yeah.  What is probably happening here is that (with the default sslmode)
libpq is trying an SSL connection, that's failing for some reason, and
then it tries a non-SSL connection which definitely fails for lack of
a matching pg_hba.conf entry; and then for some other reason you are
only shown the message concerning the last attempt.

Theory B is that your libpq wasn't compiled with SSL support so it
skips right to the non-SSL attempt.

Laurenz's suggestion of adding sslmode=require is a good debugging
step either way, since it will either tell you for sure that you
are missing SSL support or show you the failure from the single
SSL-enabled attempt.  Alternatively, turn on log_connections and
see what the server log captures.  (You might need to do that
anyway if the client-side message isn't sufficiently informative.)

regards, tom lane




Re: pg_restore enhancements

2023-11-22 Thread Efrain J. Berdecia
Thanks, I'm trying to gauge the interest in such a feature enhancement.
Up to now I have not actively contributed to the Postgres project, but this is
itching my rusty programming fingers lol
Thanks,
Efrain J. Berdecia

On Wednesday, November 22, 2023 at 08:28:18 AM EST, David G. Johnston wrote:

On Wednesday, November 22, 2023, Efrain J. Berdecia wrote:


Thanks in advance for any suggestions or the green light to post this to the 
PG-developer group :-)

If you aren’t offering up a patch for these, it isn’t developer material and
belongs right here.

David J.

Re: pg_restore enhancements

2023-11-22 Thread David G. Johnston
On Wednesday, November 22, 2023, Efrain J. Berdecia 
wrote:

>
> Thanks in advance for any suggestions or the green light to post this to
> the PG-developer group :-)
>

If you aren’t offering up a patch for these, it isn’t developer material and
belongs right here.

David J.


pg_restore enhancements

2023-11-22 Thread Efrain J. Berdecia
After working for a site where we are constantly doing logical pg_dump to 
refresh environments I've come to miss features available in other RDBMS' 
refresh/restore utilities.
Someone could point me in the right direction otherwise, but pg_restore seems
to lack the ability to resume a restore upon failure; it's all or nothing with
this guy. There also doesn't seem to be a way to control batch size during the
COPY phase, which would prevent the WAL directory from filling up and crashing
the system.

IMHO, it would be nice to have a feature that would allow pg_restore to resume
based on which parts of the restore and/or objects have already been restored.
When it comes to the COPY phase of the restore, it would be nice to be able to
control batch size and resume the COPY of a particular object upon failure.

Thanks in advance for any suggestions or the green light to post this to the
PG-developer group :-)

Thanks,
Efrain J. Berdecia

Re: Removing oids with pg_repack

2023-11-22 Thread CG
 

On Wednesday, November 22, 2023 at 01:20:18 AM EST, Achilleas Mantzios wrote:

On 21/11/23 20:41, CG wrote:

I have a very large PostgreSQL 9.5 database that still has very large tables
with oids. I'm trying to get rid of the oids with as little downtime as
possible so I can prep the database for upgrade past PostgreSQL 11. I had a
wild idea to mod pg_repack to write a new table without oids. I think it
almost works.

To test out my idea I made a new table wipe_oid_test with oids. I filled it
with a few rows of data.

But PostgreSQL still thinks that the table has oids:

mydata=# \d+ wipe_oid_test
           Table "public.wipe_oid_test"
 Column | Type | Modifiers | Storage  | Stats target | Description
--------+------+-----------+----------+--------------+-------------
 k      | text | not null  | extended |              |
 v      | text |           | extended |              |
Indexes:
    "wipe_oid_test_pkey" PRIMARY KEY, btree (k)
Has OIDs: yes
Except where does it mention in the pg_repack docs (or source) that it is
meant to be used for NO OIDS conversion?
It does not-- I was trying to leverage and tweak the base functionality of 
pg_repack which sets up triggers and migrates data. I figured if the target 
table was created without OIDs that when pg_repack did the "swap" operation 
that the new table would take over with the added bonus of not having oids.
 
I can modify pg_class and set relhasoids = false, but it isn't actually
eliminating the oid column. `\d+` will not report that it has oids, but the
oid column is still present and returns the same result as before updating
pg_class.
   
Just Don't!

Noted. ;)
 
So I'm definitely missing something. I really need a point in the right
direction. Please help! ;)
 
There are a few methods to get rid of OIDs:
 - ALTER TABLE .. SET WITHOUT OIDS (just mentioning, you already checked that)
 
This makes the database unusable for hours and hours and hours because it locks 
the table entirely while it performs the operation. That's just something that 
we can't afford.
 - Use table copy + a trigger to log changes:
https://dba.stackexchange.com/questions/259359/eliminating-oids-while-upgrading-postgresql-from-9-4-to-12
That SO is not quite the effect I'm going for. The poster of that SO was using 
OIDS in their application and needed a solution to maintain those values after 
conversion. I simply want to eliminate them without the extraordinary downtime 
the database would experience during ALTER operations.
 
 - Use of inheritance (the neatest solution I have seen; this is what I used
for a 2TB table conversion):
https://www.percona.com/blog/performing-etl-using-inheritance-in-postgresql/
 This is closest to the effect I was going for. pg_repack essentially creates a 
second table and fills it with the data from the first table while ensuring 
standard db operations against that table continue to function while the data 
is being moved from the old table to the new table. The process outlined in the 
Percona ETL strategy has to be repeated per-table, which is work I was hoping 
to avoid by leveraging 95% of the functionality of pg_repack while supplying my 
own 5% as the resulting table would not have oids regardless of the source 
table's configuration.
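For anyone following along, the per-table shape of that inheritance approach is roughly this (table names are hypothetical; application writes would still need to be redirected to the new table, e.g. with a trigger, which is omitted here):

```
-- New table without oids, attached as a child so reads on "big" see all rows.
CREATE TABLE big_new (LIKE big INCLUDING ALL) WITHOUT OIDS;
ALTER TABLE big_new INHERIT big;

-- Move rows in small batches; repeat until ONLY big is empty.
WITH moved AS (
    DELETE FROM ONLY big
    WHERE ctid = ANY (ARRAY(SELECT ctid FROM ONLY big LIMIT 10000))
    RETURNING *
)
INSERT INTO big_new SELECT * FROM moved;

-- Finally: ALTER TABLE big_new NO INHERIT big; then swap the names.
```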
For my experiment, Table A did have oids. Table B (created by pg_repack) did 
not (at least at creation). When the "swap" operation happened in pg_repack, 
the metadata for Table A was assigned to Table B. I'm just trying to figure out 
what metadata I need to change in the system tables to reflect the actual table 
structure. 
I have the fallback position for the Percona ETL strategy. But I feel like I'm 
REALLY close with pg_repack and I just don't understand enough about the system 
internals to nudge it to correctness and need some expert assistance to tap it 
in the hole.

 
CG

--
Achilleas Mantzios
IT DEV - HEAD
IT DEPT
Dynacom Tankers Mgmt

Re: PITR

2023-11-22 Thread Andreas Kretschmer




Am 22.11.23 um 11:50 schrieb Ron Johnson:
On Wed, Nov 22, 2023 at 3:12 AM Rajesh Kumar 
 wrote:




How do I do PITR. Backup strategy is weekly full backup and daily
differential backup. Using pgbackrest.

Also, in future, how do I monitor the time of drop commands?



https://blog.hagander.net/locating-the-recovery-point-just-before-a-dropped-table-230/


Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: PITR

2023-11-22 Thread Ron Johnson
On Wed, Nov 22, 2023 at 3:12 AM Rajesh Kumar 
wrote:

> Hi
>
> A person dropped a table, and we don't know the time of the drop.
>

Revoke his permission to drop tables?


> How do I do PITR. Backup strategy is weekly full backup and daily
> differential backup. Using pgbackrest.
>
> Also, in future, how do I monitor the time of drop commands?
>

Set log_statement to 'all' or 'ddl', and then grep the log files for "DROP
TABLE ".


Re: Connection fails on one system in a address range allowed to connect

2023-11-22 Thread Laurenz Albe
On Tue, 2023-11-21 at 23:27 +, Johnson, Bruce E - (bjohnson) wrote:
> DBI 
> connect('dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432','trav',...)
>  failed: FATAL:  password authentication failed for user "trav"
> FATAL:  no pg_hba.conf entry for host "150.135.124.50", user "trav", database 
> "webdata", no encryption at ./pg_test.pl line 8.
> 
> The pg_hba.conf on the server includes this which should encompass all 
> systems in this VLAN
> 
> # external 'OldMTM' site range
> hostssl all all 150.135.124.0/25 password
> 
> Anywhere else that I should look for a cause?

"no encryption" does not match a "hostssl" entry.

Either add "sslmode=require" to the connection string, or use a "hostnossl" 
entry.
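With Perl DBI / DBD::Pg, as in the script quoted earlier, sslmode goes straight into the DSN (a sketch; $password is a placeholder):

```
my $dbh = DBI->connect(
    'dbi:Pg:dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432;sslmode=require',
    'trav', $password);
```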

Yours,
Laurenz Albe




Re: Connection fails on one system in a address range allowed to connect

2023-11-22 Thread Inzamam Shafiq
You need to allow connections from this IP in your pg_hba.conf file and
reload the configuration.

From: Johnson, Bruce E - (bjohnson) 
Sent: Wednesday, November 22, 2023 4:27 AM
To: pgsql-general@lists.postgresql.org 
Subject: Connection fails on one system in a address range allowed to connect

I am migrating an existing web application from Oracle to postgres and I’m 
testing the connectivity.

Trying to run a test program (that works on another system in the same subnet!) 
I get this error:

Error system:

[root@dhbroomscheduling4 ~]# ./pg_test.pl
DBI 
connect('dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432','trav',...)
 failed: FATAL:  password authentication failed for user "trav"
FATAL:  no pg_hba.conf entry for host "150.135.124.50", user "trav", database 
"webdata", no encryption at ./pg_test.pl line 8.

Working system:

[root@avipg perl]# ./pg_test.pl
Sector Alpha Crucis has 44 worlds
Sector Antares has 37 worlds
Sector Core has 221 worlds …

(The test dataset is a collection of mapping data for an old RPG game)

Note the pg_test.pl script was copied from the working server to the non 
working one.

The pg_hba.conf on the server includes this which should encompass all systems 
in this VLAN

# external 'OldMTM' site range
hostssl all all 150.135.124.0/25 password

Another system in the same address range is working just fine with the 
identical setup, in fact it’s in production without issues.

Both systems are running Rocky Linux 9, using the perl DBI interface with 
DBD::Pg all installed from the Rocky Linux repositories.

Firewall settings, Perl version, env variables etc are the same on both client 
hosts

I know the password is correct because I just successfully logged in on the 
server with psql -U trav -d webdata -W  and used the password in the connect 
statement in the script.

Anywhere else that I should look for a cause?


--
Bruce Johnson
University of Arizona
College of Pharmacy
Information Technology Group

Institutions do not have opinions, merely customs




Connection fails on one system in a address range allowed to connect

2023-11-22 Thread Johnson, Bruce E - (bjohnson)
I am migrating an existing web application from Oracle to postgres and I’m 
testing the connectivity.

Trying to run a test program (that works on another system in the same subnet!) 
I get this error:

Error system:

[root@dhbroomscheduling4 ~]# ./pg_test.pl 
DBI 
connect('dbname=webdata;host=dhbpostgres.pharmacy.arizona.edu;port=5432','trav',...)
 failed: FATAL:  password authentication failed for user "trav"
FATAL:  no pg_hba.conf entry for host "150.135.124.50", user "trav", database 
"webdata", no encryption at ./pg_test.pl line 8.

Working system:

[root@avipg perl]# ./pg_test.pl
Sector Alpha Crucis has 44 worlds  
Sector Antares has 37 worlds  
Sector Core has 221 worlds …

(The test dataset is a collection of mapping data for an old RPG game) 

Note the pg_test.pl script was copied from the working server to the non 
working one.

The pg_hba.conf on the server includes this which should encompass all systems 
in this VLAN

# external 'OldMTM' site range
hostssl all all 150.135.124.0/25 password

Another system in the same address range is working just fine with the 
identical setup, in fact it’s in production without issues. 

Both systems are running Rocky Linux 9, using the perl DBI interface with 
DBD::Pg all installed from the Rocky Linux repositories.

Firewall settings, Perl version, env variables etc are the same on both client 
hosts

I know the password is correct because I just successfully logged in on the 
server with psql -U trav -d webdata -W  and used the password in the connect 
statement in the script.

Anywhere else that I should look for a cause?


-- 
Bruce Johnson
University of Arizona
College of Pharmacy
Information Technology Group

Institutions do not have opinions, merely customs