Re: Oracle to Postgres - Transform Hash Partition

2024-06-06 Thread Muhammad Ikram
Hi David,

Here is another approach. See if it serves your purpose

postgres=# create schema idev;
CREATE SCHEMA
postgres=#
postgres=#
postgres=# CREATE TABLE idev.assessment_result_2023_dab (

district_oid int,
-- other columns go here
column1 numeric,
column2 numeric
);
CREATE TABLE
postgres=#
postgres=#
CREATE TABLE idev.assessment_result_2023_dab_part (
LIKE idev.assessment_result_2023_dab
) PARTITION BY HASH (district_oid);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p1 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 0);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p2 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 1);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p3 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 2);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p4 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 3);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p5 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 4);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p6 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 5);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p7 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 6);

CREATE TABLE idev.assessment_result_2023_dab_part_2023_p8 PARTITION OF
idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 7);
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
postgres=#
postgres=#
postgres=# CREATE TABLE idev.temp_assessment_result_2023_dab AS TABLE
idev.assessment_result_2023_dab WITH NO DATA;
CREATE TABLE AS
postgres=#
postgres=#
postgres=# INSERT INTO idev.temp_assessment_result_2023_dab
SELECT * FROM idev.assessment_result_2023_dab;
INSERT 0 0
postgres=#
postgres=#
postgres=# TRUNCATE TABLE idev.assessment_result_2023_dab;
TRUNCATE TABLE
postgres=#
postgres=# ALTER TABLE idev.assessment_result_2023_dab_part
ATTACH PARTITION idev.assessment_result_2023_dab
FOR VALUES WITH (modulus 64, remainder 8);
ALTER TABLE
postgres=#
postgres=#
postgres=# INSERT INTO idev.assessment_result_2023_dab_part
SELECT * FROM idev.temp_assessment_result_2023_dab;
INSERT 0 0
postgres=#
postgres=# DROP TABLE idev.temp_assessment_result_2023_dab;
DROP TABLE
postgres=#
postgres=#
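
Note: the transcript above uses modulus 64 but only creates partitions for
remainders 0 through 8, so once real data is loaded, inserts would fail for any
row whose hash lands on another remainder (hash-partitioned tables cannot have a
default partition). A sketch that generates the missing partitions, assuming the
same naming scheme (the names p10 .. p64 are made up here):

DO $$
BEGIN
    FOR r IN 9..63 LOOP
        EXECUTE format(
            'CREATE TABLE idev.assessment_result_2023_dab_part_2023_p%s
                 PARTITION OF idev.assessment_result_2023_dab_part
                 FOR VALUES WITH (modulus 64, remainder %s)',
            r + 1, r);
    END LOOP;
END
$$;

Alternatively, a modulus of 9 would match the nine partitions actually created
in the transcript.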


Regards,
Muhammad Ikram
bitnine


On Thu, Jun 6, 2024 at 11:41 PM Christoph Moench-Tegeder 
wrote:

> ## David Barbour (dbarb...@istation.com):
>
> > Now I need to 'attach' the original table.  The problem I'm running into
> is
> > there are no good examples of how to define the values.
>
> The syntax is the same as with CREATE TABLE ... PARTITION OF, e.g.
> ALTER TABLE parent ATTACH PARTITION part FOR VALUES WITH (MODULUS m,
> REMAINDER r);
>
> > Here's an example:
> >  alter table idev.assessment_result_2023_dab_part
> > attach partition idev.assessment_result_2023_dab for values with(modulus
> 8,
> > remainder 1) to (modulus 8, remainder 7)
>
> There's only one (modulus, remainder) tuple in the partition bound
> definition for hash partitions, and always only one partition bound
> specification per partition.
> Maybe what you actually want is a DEFAULT partition (specified as
> PARTITION OF parent DEFAULT), or maybe a completely different approach?
>
> Regards,
> Christoph
>
> --
> Spare Space.
>
>
>

-- 
Muhammad Ikram


Re: PG 14 pg_basebackup accepts --compress=server-zst option

2024-06-06 Thread David G. Johnston
On Thursday, June 6, 2024, Kashif Zeeshan  wrote:

> Hi
>
> On Fri, Jun 7, 2024 at 6:54 AM Ron Johnson 
> wrote:
>
>>
>> https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't mention
>> "--compress=[{client|server}-]method".  That first appears in the v15
>> docs.
>>
>> And yet pg_basebackup doesn't complain about an invalid option.
>> (Technically, this is a bug; I first noticed it a week after copying a
>> script from a PG 15 server to five PG 14 servers, and running it quite a
>> few times without fail.)
>>
>
Seems a bit suspect, but since your script doesn't use the tar format, the
option itself is apparently ignored, I guess silently.  Assuming this isn't
an actual regression in behavior in a patch release of an older version, I
don't see us adding an error message at this point.


> If the support is removed then it should be mentioned in the official
> documentation.
>

Support wasn’t removed.  Re-read the email and check the version/times
being mentioned again.

David J.


Re: PG 14 pg_basebackup accepts --compress=server-zst option

2024-06-06 Thread Kashif Zeeshan
Hi

On Fri, Jun 7, 2024 at 6:54 AM Ron Johnson  wrote:

>
> https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't mention
> "--compress=[{client|server}-]method".  That first appears in the v15 docs.
>
> And yet pg_basebackup doesn't complain about an invalid option.
> (Technically, this is a bug; I first noticed it a week after copying a
> script from a PG 15 server to five PG 14 servers, and running it quite a
> few times without fail.)
>
If the support is removed then it should be mentioned in the official
documentation.

Regards
Kashif Zeeshan
Bitnine Global

>
> $ pg_basebackup \
> > --pgdata=$PGDATA \
> > --dbname=service=basebackup \
> > --verbose --progress \
> > --checkpoint=fast \
> > --write-recovery-conf \
> > --wal-method=stream \
> > --create-slot --slot=pgstandby1 \
> > --compress=server-zst ; echo $?
> pg_basebackup: initiating base backup, waiting for checkpoint to complete
> pg_basebackup: checkpoint completed
> pg_basebackup: write-ahead log start point: 256/BC28 on timeline 1
> pg_basebackup: starting background WAL receiver
> pg_basebackup: created replication slot "pgstandby1"
> 42567083/42567083 kB (100%), 1/1 tablespace
> pg_basebackup: write-ahead log end point: 256/BC000138
> pg_basebackup: waiting for background process to finish streaming ...
> pg_basebackup: syncing data to disk ...
> pg_basebackup: renaming backup_manifest.tmp to backup_manifest
> pg_basebackup: base backup completed
> 0
>
>


Re: Questions on logical replication

2024-06-06 Thread Kashif Zeeshan
On Fri, Jun 7, 2024 at 3:19 AM Koen De Groote  wrote:

> I'll give them a read, though it might take a few weekends
>
> Meanwhile, this seems to be what I'm looking for:
>
> From
> https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS
>
> " Replication slots provide an automated way to ensure that the primary
> does not remove WAL segments until they have been received by all standbys,
> and that the primary does not remove rows which could cause a recovery
> conflict
> 
> even when the standby is disconnected."
>
> I'm reading that as: "if there is a replication slot, if the standby is
> disconnected, WAL is kept"
>
> And if we know WAL is kept in the "pg_wal" directory, that sounds like it
> could slowly but surely fill up disk space.
>

Hi

Yes, that is a consideration with logical replication, but the benefits
outweigh the possible cost.
The amount of retained WAL will only grow while the standby is offline.
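
A sketch for monitoring how much WAL a slot is holding back (works on PG 13
and later, where wal_status and safe_wal_size exist):

SELECT slot_name, active, wal_status, safe_wal_size,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
           AS retained_wal
FROM pg_replication_slots;

Setting max_slot_wal_keep_size puts an upper bound on how much WAL a
disconnected slot can retain, at the price of invalidating the slot if the
limit is exceeded.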

Regards
Kashif Zeeshan
Bitnine Global

>
>
> But again, I'll give them a read. I've read all of logical replication
> already, and I feel like I didn't get my answer there.
>
> Thanks for the help
>
>
> Regards,
> Koen De Groote
>
> On Thu, Jun 6, 2024 at 12:19 AM Adrian Klaver 
> wrote:
>
>> On 6/5/24 14:54, Koen De Groote wrote:
>> > https://www.postgresql.org/docs/current/wal-configuration.html
>> > 
>> >
>> > "Checkpoints are points in the sequence of transactions at which it
>> is
>> > guaranteed that the heap and index data files have been updated with
>> > all
>> > information written before that checkpoint. At checkpoint time, all
>> > dirty data pages are flushed to disk and a special checkpoint
>> record is
>> > written to the WAL file. (The change records were previously
>> flushed to
>> > the WAL files.) In the event of a crash, the crash recovery
>> procedure
>> > looks at the latest checkpoint record to determine the point in the
>> WAL
>> > (known as the redo record) from which it should start the REDO
>> > operation. Any changes made to data files before that point are
>> > guaranteed to be already on disk. Hence, after a checkpoint, WAL
>> > segments preceding the one containing the redo record are no longer
>> > needed and can be recycled or removed. (When WAL archiving is being
>> > done, the WAL segments must be archived before being recycled or
>> > removed.)"
>> >
>> >
>> > And this is the same for logical replication and physical replication,
>> I
>> > take it.
>>
>> High level explanation, both physical and logical replication use the
>> WAL files as the starting point. When the recycling is done is dependent
>> on various factors. My suggestion would be to read through the below to
>> get a better idea of what is going. There is a lot to cover, but if you
>> really want to understand it you will need to go through it.
>>
>> Physical replication
>>
>> https://www.postgresql.org/docs/current/high-availability.html
>>
>> 27.2.5. Streaming Replication
>> 27.2.6. Replication Slots
>>
>> Logical replication
>>
>> https://www.postgresql.org/docs/current/logical-replication.html
>>
>> WAL
>>
>> https://www.postgresql.org/docs/current/wal.html
>>
>>
>>
>> >
>> > Thus, if a leader has a standby of the same version, and meanwhile
>> > logical replication is being done to a newer version, both those
>> > replications are taken into account, is that correct?
>>
>> Yes, see links above.
>>
>>
>> > And if it cannot sync them, due to connectivity loss for instance, the
>> > WAL records will not be removed, then?
>>
>> Depends on the type of replication being done. It is possible for
>> physical replication to have WAL records removed that are still needed
>> downstream.
>>
>> From
>>
>>
>> https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION
>>
>> "If you use streaming replication without file-based continuous
>> archiving, the server might recycle old WAL segments before the standby
>> has received them. If this occurs, the standby will need to be
>> reinitialized from a new base backup. You can avoid this by setting
>> wal_keep_size to a value large enough to ensure that WAL segments are
>> not recycled too early, or by configuring a replication slot for the
>> standby. If you set up a WAL archive that's accessible from the standby,
>> these solutions are not required, since the standby can always use the
>> archive to catch up provided it retains enough segments."
>>
>> This is why it is good idea to go through the links I posted above.
>>
>> >
>> > Regards,
>> > Koen De Groote
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>


PG 14 pg_basebackup accepts --compress=server-zst option

2024-06-06 Thread Ron Johnson
https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't mention
"--compress=[{client|server}-]method".  That first appears in the v15 docs.

And yet pg_basebackup doesn't complain about an invalid option.
(Technically, this is a bug; I first noticed it a week after copying a
script from a PG 15 server to five PG 14 servers, and running it quite a
few times without fail.)

$ pg_basebackup \
> --pgdata=$PGDATA \
> --dbname=service=basebackup \
> --verbose --progress \
> --checkpoint=fast \
> --write-recovery-conf \
> --wal-method=stream \
> --create-slot --slot=pgstandby1 \
> --compress=server-zst ; echo $?
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 256/BC28 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created replication slot "pgstandby1"
42567083/42567083 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 256/BC000138
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: renaming backup_manifest.tmp to backup_manifest
pg_basebackup: base backup completed
0


Re: Questions on logical replication

2024-06-06 Thread Koen De Groote
I'll give them a read, though it might take a few weekends

Meanwhile, this seems to be what I'm looking for:

From
https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS

" Replication slots provide an automated way to ensure that the primary
does not remove WAL segments until they have been received by all standbys,
and that the primary does not remove rows which could cause a recovery
conflict

even when the standby is disconnected."

I'm reading that as: "if there is a replication slot, if the standby is
disconnected, WAL is kept"

And if we know WAL is kept in the "pg_wal" directory, that sounds like it
could slowly but surely fill up disk space.


But again, I'll give them a read. I've read all of logical replication
already, and I feel like I didn't get my answer there.

Thanks for the help


Regards,
Koen De Groote

On Thu, Jun 6, 2024 at 12:19 AM Adrian Klaver 
wrote:

> On 6/5/24 14:54, Koen De Groote wrote:
> > https://www.postgresql.org/docs/current/wal-configuration.html
> > 
> >
> > "Checkpoints are points in the sequence of transactions at which it
> is
> > guaranteed that the heap and index data files have been updated with
> > all
> > information written before that checkpoint. At checkpoint time, all
> > dirty data pages are flushed to disk and a special checkpoint record
> is
> > written to the WAL file. (The change records were previously flushed
> to
> > the WAL files.) In the event of a crash, the crash recovery procedure
> > looks at the latest checkpoint record to determine the point in the
> WAL
> > (known as the redo record) from which it should start the REDO
> > operation. Any changes made to data files before that point are
> > guaranteed to be already on disk. Hence, after a checkpoint, WAL
> > segments preceding the one containing the redo record are no longer
> > needed and can be recycled or removed. (When WAL archiving is being
> > done, the WAL segments must be archived before being recycled or
> > removed.)"
> >
> >
> > And this is the same for logical replication and physical replication, I
> > take it.
>
> High level explanation, both physical and logical replication use the
> WAL files as the starting point. When the recycling is done is dependent
> on various factors. My suggestion would be to read through the below to
> get a better idea of what is going. There is a lot to cover, but if you
> really want to understand it you will need to go through it.
>
> Physical replication
>
> https://www.postgresql.org/docs/current/high-availability.html
>
> 27.2.5. Streaming Replication
> 27.2.6. Replication Slots
>
> Logical replication
>
> https://www.postgresql.org/docs/current/logical-replication.html
>
> WAL
>
> https://www.postgresql.org/docs/current/wal.html
>
>
>
> >
> > Thus, if a leader has a standby of the same version, and meanwhile
> > logical replication is being done to a newer version, both those
> > replications are taken into account, is that correct?
>
> Yes, see links above.
>
>
> > And if it cannot sync them, due to connectivity loss for instance, the
> > WAL records will not be removed, then?
>
> Depends on the type of replication being done. It is possible for
> physical replication to have WAL records removed that are still needed
> downstream.
>
> From
>
>
> https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION
>
> "If you use streaming replication without file-based continuous
> archiving, the server might recycle old WAL segments before the standby
> has received them. If this occurs, the standby will need to be
> reinitialized from a new base backup. You can avoid this by setting
> wal_keep_size to a value large enough to ensure that WAL segments are
> not recycled too early, or by configuring a replication slot for the
> standby. If you set up a WAL archive that's accessible from the standby,
> these solutions are not required, since the standby can always use the
> archive to catch up provided it retains enough segments."
>
> This is why it is good idea to go through the links I posted above.
>
> >
> > Regards,
> > Koen De Groote
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Can't Remote connection by IpV6

2024-06-06 Thread Adrian Klaver

On 6/6/24 10:37, Marcelo Marloch wrote:

Reply to list also.
Ccing list

Hi Adrian, thanks for your help. The server is running on Windows and I 
opened port 5432 in the Windows firewall; I did not see any options 
distinguishing IPv4 from IPv6. The client also uses Windows and connects 
through the ODBC driver. Over IPv4 it connects flawlessly when it has a 
public IPv4 address, but when I set the ODBC driver's server field to an 
IPv6 address it shows the message: connection to server at (ipv6 ip), port 
5432 failed: Connection timed out (0x274C/10060), is the server running on 
that host and accepting TCP/IP connections? The same message is shown if 
the provider assigns me a non-public IPv4 address.


To be clear the Postgres server is running on a Windows instance that is 
itself running on a cloud(?) provider. Seems to me this still points at 
a firewall, namely one the provider has set up in front of your Windows 
instance. The hint would be the public IPv4 address working vs the 
private IPv4 address not working. There is also the issue of whether the 
IP addresses that are not on the public side of the internet are being 
drawn from IP private network ranges (see 
https://en.wikipedia.org/wiki/IP_address) that are not publishable to 
the public internet. You may need to talk with the provider and see how 
they suggest you punch a hole through to your Postgres server.




thank you very much!

Em qui., 6 de jun. de 2024 às 12:03, Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> escreveu:


On 6/6/24 07:46, Marcelo Marloch wrote:
 > Hi everyone, is it possible to remote connect through IpV6? IpV4
works
 > fine but I cant connect through V6
 >
 > postgresql.conf is to listen all address and pg_hba.conf is set with
 > host all all :: md5 i've tried ::/0 and ::0/0 but had no success

Is the firewall open for IPv6 connections to the Postgres port?

Did you reload the server after making the configuration changes?

Define in more detail what 'no success' means.

 >
 > my provider is out of ipv4 and they're sending ips by cgnat if I
want a
 > public ipv4 I have to sign a very expensive service fee
 >
 > thanks a lot

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Can't Remote connection by IpV6

2024-06-06 Thread Alan Hodgson
On Thu, 2024-06-06 at 11:46 -0300, Marcelo Marloch wrote:
> Hi everyone, is it possible to remote connect through IpV6? IpV4
> works fine but I cant connect through V6 
> 
> postgresql.conf is to listen all address and pg_hba.conf is set
> with host all all :: md5 i've tried ::/0 and ::0/0 but had no
> success
> 
> my provider is out of ipv4 and they're sending ips by cgnat if I
> want a public ipv4 I have to sign a very expensive service fee  
> 
> thanks a lot

listen_addresses = '*'

or listen_addresses = '::' just for IPv6.

Remember to adjust pg_hba.conf as well.
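
A minimal sketch of the relevant settings, assuming md5 auth as in your
pg_hba.conf (the IPv6 entry needs a CIDR mask):

# postgresql.conf -- changing listen_addresses requires a restart
listen_addresses = '*'        # or '::' to listen on all IPv6 addresses only

# pg_hba.conf
host    all    all    ::/0    md5

pg_hba.conf changes take effect on reload:

SELECT pg_reload_conf();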


Re: Oracle to Postgres - Transform Hash Partition

2024-06-06 Thread Christoph Moench-Tegeder
## David Barbour (dbarb...@istation.com):

> Now I need to 'attach' the original table.  The problem I'm running into is
> there are no good examples of how to define the values.

The syntax is the same as with CREATE TABLE ... PARTITION OF, e.g.
ALTER TABLE parent ATTACH PARTITION part FOR VALUES WITH (MODULUS m, REMAINDER r);

> Here's an example:
>  alter table idev.assessment_result_2023_dab_part
> attach partition idev.assessment_result_2023_dab for values with(modulus 8,
> remainder 1) to (modulus 8, remainder 7)

There's only one (modulus, remainder) tuple in the partition bound
definition for hash partitions, and always only one partition bound
specification per partition.
Maybe what you actually want is a DEFAULT partition (specified as
PARTITION OF parent DEFAULT), or maybe a completely different approach?
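
For instance, a single-bound attach would look like this sketch, using the
names from the quoted example (note that ATTACH PARTITION checks that every
existing row satisfies the hash constraint, which an unmodified source table
generally will not, hence the copy-out/truncate/reload approach elsewhere in
this thread; note also that a hash-partitioned table cannot have a DEFAULT
partition, so that option only applies with a different partitioning strategy):

ALTER TABLE idev.assessment_result_2023_dab_part
    ATTACH PARTITION idev.assessment_result_2023_dab
    FOR VALUES WITH (MODULUS 8, REMAINDER 0);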

Regards,
Christoph

-- 
Spare Space.




Tables get stuck at srsubstate = f

2024-06-06 Thread Avi Weinberg
Hi all,

I'm using logical replication with Postgres 15.2.

When creating logical replication on multiple tables, the initial sync 
sometimes gets stuck for a few tables at state srsubstate = f.  If I recreate 
the logical replication, it may get stuck at srsubstate = f for other tables, 
so it does not seem to be table specific.  Of course, in the majority of 
cases, building the logical replication succeeds and all tables end up at 
srsubstate = r.

What might be the reason that logical replication gets stuck during the 
initial sync at srsubstate = f?
We use the view pg_stat_progress_copy to monitor the initial sync.  Is it 
possible that querying it frequently during the initial sync causes some lock 
not to be released?
Is it a Postgres bug?
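
For reference, the per-table sync state ('i' = initialize, 'd' = data copy,
'f' = finished copy, 's' = synchronized, 'r' = ready) can be inspected on the
subscriber with a query like this sketch:

SELECT srrelid::regclass AS table_name, srsubstate, srsublsn
FROM pg_subscription_rel
WHERE srsubstate <> 'r';

SELECT * FROM pg_stat_subscription;   -- apply worker plus any tablesync workers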

Thanks!



Oracle to Postgres - Transform Hash Partition

2024-06-06 Thread David Barbour
Hi,

New to this list and to Postgres.  Been an Oracle DBA for quite a while and
we're moving from Oracle to Postgres.

I have a table that I need to transform into a hash partitioned table.
I've gone through all the steps to create a hash partitioned table, with 8
partitions such as

create table idev.assessment_result_2023_dab_part (like
idev.assessment_result_2023_dab) partition by hash (district_oid);

CREATE TABLE idev.assessment_result_2023_dab_part_assessment_result_2023_p1
PARTITION OF idev.assessment_result_2023_dab_part
FOR VALUES WITH (modulus 64, remainder 0)

etc through partition 2023_p8 FOR VALUES WITH (modulus 64, remainder 7)

Now I need to 'attach' the original table.  The problem I'm running into is
there are no good examples of how to define the values.

I've tried several iterations of various 'for values', 'values', 'for
values with', etc. but they all error out.

Here's an example:
 alter table idev.assessment_result_2023_dab_part
attach partition idev.assessment_result_2023_dab for values with(modulus 8,
remainder 1) to (modulus 8, remainder 7)

ERROR: syntax error at or near "to" LINE 2: ..._2023_dab for values
with(modulus 8, remainder 1) to (modulu...

Any assistance would be appreciated.
-- 

*David A. Barbour*

*dbarb...@istation.com *

*(214) 292-4096*

Istation

8150 North Central Expressway, Suite 2000

Dallas, TX 75206

www.Istation.com 





Re: Logical replication type- WAL recovery fails and changes the size of wal segment in archivedir

2024-06-06 Thread Ron Johnson
On Wed, Jun 5, 2024 at 6:26 AM Laurenz Albe 
wrote:

> On Wed, 2024-06-05 at 06:36 +, Meera Nair wrote:
> > 2024-06-05 11:41:32.369 IST [54369] LOG:  restored log file
> "00050001006A" from archive
> > 2024-06-05 11:41:33.112 IST [54369] LOG:  restored log file
> "00050001006B" from archive
> > cp: cannot stat ‘/home/pgsql/wmaster/00050001006C’: No such
> file or directory
> > 2024-06-05 11:41:33.167 IST [54369] LOG:  redo done at 1/6B000100
> > 2024-06-05 11:41:33.172 IST [54369] FATAL:  archive file
> "00050001006B" has wrong size: 0 instead of 16777216
> > 2024-06-05 11:41:33.173 IST [54367] LOG:  startup process (PID 54369)
> exited with exit code 1
> > 2024-06-05 11:41:33.173 IST [54367] LOG:  terminating any other active
> server processes
> > 2024-06-05 11:41:33.174 IST [54375] FATAL:  archive command was
> terminated by signal 3: Quit
> > 2024-06-05 11:41:33.174 IST [54375] DETAIL:  The failed archive command
> was: cp pg_wal/00050001006B
> /home/pgsql/wmaster/00050001006B
> > 2024-06-05 11:41:33.175 IST [54367] LOG:  archiver process (PID 54375)
> exited with exit code 1
> > 2024-06-05 11:41:33.177 IST [54367] LOG:  database system is shut down
> >
> > Here ‘/home/pgsql/wmaster’ is my archivedir (the folder where WAL
> segments are restored from)
> >
> > Before attempting start, size of
> > 00050001006B file was 16 MB.
> > After failing to detect 00050001006C, there is a FATAL error
> saying wrong size for 00050001006B
> > Now the size of 00050001006B is observed as 2 MB. Size of
> all other WAL segments remain 16 MB.
> >
> > -rw--- 1 postgres postgres  2359296 Jun  5 11:34
> 00050001006B
>
> That looks like you have "archive_mode = always", and "archive_command"
> writes
> back to the archive.  Don't do that.
>

In fact, don't write your own PITR backup process.  Use something like
PgBackRest or BarMan.
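
If you do keep a hand-rolled setup, a sketch of the stock arrangement from the
docs, where the archive directory is only ever written by archive_command on
the primary and only ever read by restore_command (/mnt/server/archivedir is a
placeholder path):

# postgresql.conf on the primary
archive_mode = on
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

# on the server doing the recovery / the standby
restore_command = 'cp /mnt/server/archivedir/%f %p'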


Re: Can't Remote connection by IpV6

2024-06-06 Thread Ron Johnson
On Thu, Jun 6, 2024 at 11:03 AM Adrian Klaver 
wrote:

> On 6/6/24 07:46, Marcelo Marloch wrote:
> > Hi everyone, is it possible to remote connect through IpV6? IpV4 works
> > fine but I cant connect through V6
> >
> > postgresql.conf is to listen all address and pg_hba.conf is set with
> > host all all :: md5 i've tried ::/0 and ::0/0 but had no success
>
> Is the firewall open for IPv6 connections to the Postgres port?
>

netcat (comes with nmap) is great for this.  There's a Windows client, too.


Re: Can't Remote connection by IpV6

2024-06-06 Thread Adrian Klaver

On 6/6/24 07:46, Marcelo Marloch wrote:
Hi everyone, is it possible to remote connect through IpV6? IpV4 works 
fine but I cant connect through V6


postgresql.conf is to listen all address and pg_hba.conf is set with 
host all all :: md5 i've tried ::/0 and ::0/0 but had no success


Is the firewall open for IPv6 connections to the Postgres port?

Did you reload the server after making the configuration changes?

Define in more detail what 'no success' means.



my provider is out of ipv4 and they're sending ips by cgnat if I want a 
public ipv4 I have to sign a very expensive service fee


thanks a lot


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Can't Remote connection by IpV6

2024-06-06 Thread Kashif Zeeshan
Hi Marcelo

Yes, it's possible to connect to Postgres remotely over IPv6, e.g. with
psql.
If you are facing issues, make sure you have done the required
configuration.

Please refer to the following links for more details.

https://dba.stackexchange.com/questions/148086/how-do-you-use-psql-client-to-connect-to-a-postgresql-ipv6-host
https://www.highgo.ca/2020/08/21/how-to-setup-postgresql-on-an-ipv6-enabled-network/
https://www.bigbinary.com/blog/configure-postgresql-to-allow-remote-connection
https://www.postgresql.org/docs/8.1/client-authentication.html
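
For example, a quick connectivity check with psql (a sketch; 2001:db8::10 and
the user/database names are placeholders, and IPv6 literals need brackets in
the URI form):

psql "host=2001:db8::10 port=5432 user=app dbname=appdb"

psql "postgresql://app@[2001:db8::10]:5432/appdb"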

Regards
Kashif Zeeshan
Bitnine Global

On Thu, Jun 6, 2024 at 7:46 PM Marcelo Marloch  wrote:

> Hi everyone, is it possible to remote connect through IpV6? IpV4 works
> fine but I cant connect through V6
>
> postgresql.conf is to listen all address and pg_hba.conf is set with host
> all all :: md5 i've tried ::/0 and ::0/0 but had no success
>
> my provider is out of ipv4 and they're sending ips by cgnat if I want a
> public ipv4 I have to sign a very expensive service fee
>
> thanks a lot
>


Can't Remote connection by IpV6

2024-06-06 Thread Marcelo Marloch
Hi everyone, is it possible to connect remotely through IPv6? IPv4 works fine
but I can't connect through v6.

postgresql.conf is set to listen on all addresses and pg_hba.conf is set with
host all all :: md5; I've tried ::/0 and ::0/0 but had no success.

my provider is out of IPv4 and they're handing out IPs via CGNAT; if I want a
public IPv4 I have to sign up for a very expensive service

thanks a lot


Re: how to tell if a pg version supports a linux distribution

2024-06-06 Thread Laurenz Albe
On Wed, 2024-06-05 at 16:24 -0400, bruno vieira da silva wrote:
> Hello, if a PG version has been tested on the buildfarm but the PG yum
> repository doesn't have packages for a Linux distribution, does that mean
> that distribution isn't supported by PG? How can I find out which Linux
> distributions a PG version has had regression tests executed against?
> 
> e.g.: PostgreSQL 16 doesn't have packages on yum for CentOS 7, but I
> can find tests on the buildfarm for it.
> 
> https://buildfarm.postgresql.org/index.html

PostgreSQL supports all Linux distributions.  It doesn't particularly
care about the distribution as long as all the required software is
installed (https://www.postgresql.org/docs/current/install-requirements.html).

There are certainly other considerations.  For example, if your Linux
distribution uses musl as C library, which has dysfunctional collation
support, then collations won't work in PostgreSQL either, since it uses
that functionality by default.

If you want to avoid surprises, it might be good to use widely-used
distributions, but that doesn't mean that other distributions are not
supported.

Packaging is a completely different affair.  PostgreSQL provides
binary packages for the distributions that a packager cares about.
If Devrim decides that he doesn't want to build packages for v16
for a crummy old CentOS release, that's his choice.

Yours,
Laurenz Albe