Re: WIP/PoC for parallel backup

2020-04-02 Thread Kashif Zeeshan
Hi Asif

The backup failed with the error "error: could not connect to server: could
not look up local user ID 1000: Too many open files" when
max_wal_senders was set to 2000.
The errors were generated for workers starting from backup worker 1017.
Please note that the backup directory was also not cleaned up after the
backup failed.


Steps
===
1) Generate data in DB
 ./pgbench -i -s 600 -h localhost  -p 5432 postgres
2) Set max_wal_senders = 2000 in postgresql.conf.
3) Generate the backup


[edb@localhost bin]$
[edb@localhost bin]$ ./pg_basebackup -v -j 1990 -D /home/edb/Desktop/backup/
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 1/F128 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_58692"
pg_basebackup: backup worker (0) created
….
…..
…..
pg_basebackup: backup worker (1017) created
pg_basebackup: error: could not connect to server: could not look up local
user ID 1000: Too many open files
pg_basebackup: backup worker (1018) created
pg_basebackup: error: could not connect to server: could not look up local
user ID 1000: Too many open files
…
…
…
pg_basebackup: error: could not connect to server: could not look up local
user ID 1000: Too many open files
pg_basebackup: backup worker (1989) created
pg_basebackup: error: could not create file
"/home/edb/Desktop/backup//global/4183": Too many open files
pg_basebackup: error: could not create file
"/home/edb/Desktop/backup//global/3592": Too many open files
pg_basebackup: error: could not create file
"/home/edb/Desktop/backup//global/4177": Too many open files
[edb@localhost bin]$


4) The backup directory is not cleaned up


[edb@localhost bin]$
[edb@localhost bin]$ ls  /home/edb/Desktop/backup
base    pg_commit_ts  pg_logical    pg_notify    pg_serial     pg_stat      pg_subtrans  pg_twophase  pg_xact
global  pg_dynshmem   pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    pg_wal
[edb@localhost bin]$


Kashif Zeeshan
EnterpriseDB


On Thu, Apr 2, 2020 at 2:58 PM Rajkumar Raghuwanshi <
rajkumar.raghuwan...@enterprisedb.com> wrote:

> Hi Asif,
>
> My colleague Kashif Zeeshan reported an issue off-list, posting here,
> please take a look.
>
> When executing two backups at the same time, a FATAL error is raised due to
> max_wal_senders, but instead of exiting, the backup completes.
> And when trying to start the server from the backup cluster, an error occurs.
>
> [edb@localhost bin]$ ./pgbench -i -s 200 -h localhost -p 5432 postgres
> [edb@localhost bin]$ ./pg_basebackup -v -j 8 -D  /home/edb/Desktop/backup/
> pg_basebackup: initiating base backup, waiting for checkpoint to complete
> pg_basebackup: checkpoint completed
> pg_basebackup: write-ahead log start point: 0/C2000270 on timeline 1
> pg_basebackup: starting background WAL receiver
> pg_basebackup: created temporary replication slot "pg_basebackup_57849"
> pg_basebackup: backup worker (0) created
> pg_basebackup: backup worker (1) created
> pg_basebackup: backup worker (2) created
> pg_basebackup: error: could not connect to server: FATAL:  number of
> requested standby connections exceeds max_wal_senders (currently 10)
> pg_basebackup: backup worker (3) created
> pg_basebackup: error: could not connect to server: FATAL:  number of
> requested standby connections exceeds max_wal_senders (currently 10)
> pg_basebackup: backup worker (4) created
> pg_basebackup: error: could not connect to server: FATAL:  number of
> requested standby connections exceeds max_wal_senders (currently 10)
> pg_basebackup: backup worker (5) created
> pg_basebackup: error: could not connect to server: FATAL:  number of
> requested standby connections exceeds max_wal_senders (currently 10)
> pg_basebackup: backup worker (6) created
> pg_basebackup: error: could not connect to server: FATAL:  number of
> requested standby connections exceeds max_wal_senders (currently 10)
> pg_basebackup: backup worker (7) created
> pg_basebackup: write-ahead log end point: 0/C350
> pg_basebackup: waiting for background process to finish streaming ...
> pg_basebackup: syncing data to disk ...
> pg_basebackup: base backup completed
> [edb@localhost bin]$ ./pg_basebackup -v -j 8 -D /home/edb/Desktop/backup1/
> pg_basebackup: initiating base backup, waiting for checkpoint to complete
> pg_basebackup: checkpoint completed
> pg_basebackup: write-ahead log start point: 0/C20001C0 on timeline 1
> pg_basebackup: starting background WAL receiver
> pg_basebackup: created temporary replication slot "pg_basebackup_57848"
> pg_basebackup: backup worker (0) created
> pg_basebackup: backup worker (1) created
> pg_ba

Re: WIP/PoC for parallel backup

2020-04-02 Thread Kashif Zeeshan
On Thu, Apr 2, 2020 at 4:48 PM Robert Haas  wrote:

> On Thu, Apr 2, 2020 at 7:30 AM Kashif Zeeshan <
> kashif.zees...@enterprisedb.com> wrote:
>
>> The backup failed with the error "error: could not connect to server: could
>> not look up local user ID 1000: Too many open files" when
>> max_wal_senders was set to 2000.
>> The errors were generated for workers starting from backup worker 1017.
>>
>
> It wasn't the fact that you set max_wal_senders to 2000. It was the fact
> that you specified 1990 parallel workers. By so doing, you overloaded the
> machine, which is why everything failed. That's to be expected.
>
Thanks a lot, Robert.
In this case the backup folder was not emptied, as the backup failed; the
cleanup should be done in this case too.
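The "Too many open files" failures above come from the per-process limit on open file descriptors, which roughly 1990 workers (each holding a connection socket plus file handles) can easily exhaust. As a hypothetical illustration (not part of pg_basebackup), the limit can be inspected and the soft limit raised from Python:

```python
import resource

# Query the soft and hard limits on open file descriptors for this process.
# Each backup worker needs at least one connection socket plus file handles,
# so ~1990 workers easily exhausts a typical soft limit of 1024.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# The soft limit can be raised up to the hard limit without extra privileges.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"raised soft limit to: {new_soft}")
```

The same limits can be viewed from the shell with `ulimit -n`; raising them (or lowering -j) avoids this particular failure mode, though the cleanup question below stands regardless.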


> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


-- 
Regards

Kashif Zeeshan
Lead Quality Assurance Engineer / Manager

EnterpriseDB Corporation
The Enterprise Postgres Company


Re: WIP/PoC for parallel backup

2020-04-02 Thread Kashif Zeeshan
On Thu, Apr 2, 2020 at 6:23 PM Robert Haas  wrote:

> On Thu, Apr 2, 2020 at 7:55 AM Kashif Zeeshan
>  wrote:
> > Thanks a lot, Robert.
> > In this case the backup folder was not emptied, as the backup failed; the
> cleanup should be done in this case too.
>
> Does it fail to clean up the backup folder in all cases where the
> backup failed, or just in this case?
>
The cleanup is done in the cases I have seen so far with base pg_basebackup
functionality (not including the parallel backup feature), with the message
"pg_basebackup: removing contents of data directory".
A similar case, reported by Rajkumar, was also fixed for parallel backup,
where the contents of the backup folder were not cleaned up after the error.

>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


-- 
Regards

Kashif Zeeshan
Lead Quality Assurance Engineer / Manager

EnterpriseDB Corporation
The Enterprise Postgres Company


Re: WIP/PoC for parallel backup

2020-04-03 Thread Kashif Zeeshan
> {
>> char backupid[NAMEDATALEN];
>> XLogRecPtr startptr;
>> slock_t lock;
>> int64 throttling_counter;
>> bool backup_started_in_recovery;
>> } BackupSharedState;
>>
>>
> The shared state structure entries would be maintained by a shared hash
> table.
> There will be one structure per parallel backup. Since a single parallel
> backup can engage more than one WAL sender, I think max_wal_senders might
> be a little too much; perhaps max_wal_senders/2, since there will be at
> least 2 connections per parallel backup? Alternatively, we can set a new
> GUC that defines the maximum number of concurrent parallel backups, i.e.
> ‘max_concurrent_backups_allowed = 10’ perhaps, or we can make it
> user-configurable.
>
> The key would be “backupid=hex_encode(pg_random_strong(16))”
>
> Checking for Standby Promotion:
> At the START_BACKUP command, we initialize
> BackupSharedState.backup_started_in_recovery
> and keep checking it whenever send_file() is called to send a new file.
>
> Throttling:
> BackupSharedState.throttling_counter - The throttling logic remains the
> same
> as for non-parallel backup with the exception that multiple threads will
> now be
> updating it. So in parallel backup, this will represent the overall bytes
> that
> have been transferred. So the workers would sleep if they have exceeded the
> limit. Hence, the shared state carries a lock to safely update the
> throttling
> value atomically.
>
> Progress Reporting:
> I think we should add progress reporting for parallel backup as a
> separate patch. The relevant entries for progress reporting, such as
> ‘backup_total’ and ‘backup_streamed’, would then be added to this structure
> as well.
>
>
> Grammar:
> There is a change in the result set being returned for the START_BACKUP
> command;
> unique_backup_id is added. Additionally, JOIN_BACKUP replication command is
> added. SEND_FILES has been renamed to SEND_FILE. There are no other changes
> to the grammar.
>
> START_BACKUP [LABEL ''] [FAST]
>   - returns startptr, tli, backup_label, unique_backup_id
> STOP_BACKUP [NOWAIT]
>   - returns startptr, tli, backup_label
> JOIN_BACKUP ‘unique_backup_id’
>   - attaches a shared state identified by ‘unique_backup_id’ to a backend
> process.
>
> LIST_TABLESPACES [PROGRESS]
> LIST_FILES [TABLESPACE]
> LIST_WAL_FILES [START_WAL_LOCATION 'X/X'] [END_WAL_LOCATION 'X/X']
> SEND_FILE '(' FILE ')' [NOVERIFY_CHECKSUMS]
>
>
> --
> Asif Rehman
> Highgo Software (Canada/China/Pakistan)
> URL : www.highgo.ca
>
>
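The throttling design quoted above, a single shared counter updated under a lock by multiple workers, can be sketched in miniature. This is an illustrative Python-threads analogue with made-up names, not the actual patch code (which would use WAL sender processes and a spinlock):

```python
import threading

class BackupSharedState:
    """Toy analogue of the proposed shared state: one counter, one lock."""
    def __init__(self):
        self.lock = threading.Lock()   # stands in for the slock_t lock
        self.throttling_counter = 0    # overall bytes transferred so far

    def account_bytes(self, nbytes, limit):
        """Add nbytes under the lock; return True if the caller should
        sleep because the combined transfer exceeded the limit."""
        with self.lock:
            self.throttling_counter += nbytes
            return self.throttling_counter > limit

state = BackupSharedState()

def worker(chunks):
    for nbytes in chunks:
        if state.account_bytes(nbytes, limit=10_000):
            pass  # a real worker would sleep here to throttle itself

# Four "workers" each account 50 chunks of 100 bytes concurrently.
threads = [threading.Thread(target=worker, args=([100] * 50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The lock prevents lost updates: every byte is counted exactly once.
print(state.throttling_counter)  # -> 20000
```

The point of the lock is visible here: without it, concurrent read-modify-write updates of throttling_counter could be lost, under-counting the transferred bytes and defeating the rate limit.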

-- 
Regards

Kashif Zeeshan
Lead Quality Assurance Engineer / Manager

EnterpriseDB Corporation
The Enterprise Postgres Company


Re: ODBC Source Downloads Missing

2024-06-10 Thread Kashif Zeeshan
I am getting the same issue on my end; the error message is "The URL you
specified does not exist.".

On Tue, Jun 11, 2024 at 12:33 AM Mark Hill  wrote:

> Is there an issue with the ODBC Source downloads today?
>
> The source download URL isn’t working:
> https://www.postgresql.org/ftp/odbc/versions/src/
>
>
>
> Thanks, Mark
>


Re: ODBC Source Downloads Missing

2024-06-11 Thread Kashif Zeeshan
On Tue, Jun 11, 2024 at 4:52 PM Fahar Abbas 
wrote:

> Hello Mark,
>
> You can find psqlodbc at the link below
>
> https://www.postgresql.org/ftp/odbc/releases/REL-16_00_0005/
>
> He is asking about the source code, whereas this link contains the installer.

Regards
Kashif Zeeshan
Bitnine Global

> Kind Regards,
> Fahar Abbas
>
> On Tuesday, June 11, 2024, Mark Hill  wrote:
>
>> Is there an issue with the ODBC Source downloads today?
>>
>> The source download URL isn’t working:
>> https://www.postgresql.org/ftp/odbc/versions/src/
>>
>>
>>
>> Thanks, Mark
>>
>


Re: Proposal: Division operator for (interval / interval => double precision)

2024-06-23 Thread Kashif Zeeshan
Hi

It's always a good idea to extend the functionality of PG.

Thanks
Kashif Zeeshan

On Mon, Jun 24, 2024 at 5:57 AM Gurjeet Singh  wrote:

> Is there a desire to have a division operator / that takes dividend
> and divisor of types interval, and results in a quotient of type
> double precision.
>
> This would be helpful in calculating how many times the divisor
> interval can fit into the dividend interval.
>
> To complement this division operator, it would be desirable to also
> have a remainder operator %.
>
> For example,
>
> ('365 days'::interval / '5 days'::interval) => 73
> ('365 days'::interval % '5 days'::interval) => 0
>
> ('365 days'::interval / '3 days'::interval) => 121
> ('365 days'::interval % '3 days'::interval) => 2
>
> Best regards,
> Gurjeet
> http://Gurje.et
>
>
>
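As a behavioral reference for the proposed operators: Python's datetime.timedelta already supports interval-by-interval division and remainder, where floor division matches the "how many times does the divisor fit" semantics in the examples above. A small sketch:

```python
from datetime import timedelta

year = timedelta(days=365)

# interval / interval -> double precision (true division)
print(year / timedelta(days=5))    # -> 73.0

# Floor division and remainder, matching the proposed / and % examples:
# '365 days' fits '3 days' 121 times with '2 days' left over.
print(year // timedelta(days=3))   # -> 121
print(year % timedelta(days=3))    # -> 2 days, 0:00:00
```

Note that timedelta's / is true division (365/3 gives 121.666...), while // gives the whole-number count from the proposal; a SQL design would need to pick one of the two semantics for the / operator.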


Re: Recommended books for admin

2024-06-25 Thread Kashif Zeeshan
Hi Tom

There is a lot of material available online; you just need to find it. The
official PG documentation is extensive too.

Regards
Kashif Zeeshan

On Tue, Jun 25, 2024 at 7:04 PM Tom Browder  wrote:

> I’m a complete novice, although I’ve dipped my toes in Admin waters a
> couple of times in my many years of using Linux.
>
> Can anyone recommend a good book on installing Postgres on multiple,
> connected multiuser systems, tuning it, managing users, backups, updates,
> etc.?
>
> A cookbook/checklist approach would be great. I’ve bought several books
> over the years but a more current one is desirable.
>
> Thanks for any help.
>
> Best regards,
>
> -Tom
>


Re: Doc: fix track_io_timing description to mention pg_stat_io

2024-06-27 Thread Kashif Zeeshan
On Thu, Jun 27, 2024 at 2:06 PM  wrote:

> Hi,
>
> pg_stat_io has I/O statistics that are collected when track_io_timing is
> enabled, but it is not mentioned in the description of track_io_timing.
> I think it's better to add a description of pg_stat_io for easy reference.
> What do you think?
>
It's always good to add new things.

>
> Regards,
> --
> Hajime Matsunaga
> NTT DATA Group Corporation
>


Re: Update platform notes to build Postgres on macos

2024-07-04 Thread Kashif Zeeshan
Hi

I think the documentation should be updated and all prerequisites must be added.


Regards
Kashif Zeeshan



On Thu, Jul 4, 2024 at 11:02 PM Said Assemlal  wrote:

> Hi,
>
>
> I just built postgresql on macos sonoma (v14) and I had to install the
> following packages:
>
>- * icu - https://ports.macports.org/port/icu/
>- * pkg - https://ports.macports.org/port/pkgconfig/
>
> I don't see anything related to this on
> https://www.postgresql.org/docs/devel/installation-platform-notes.html
>
> Did I miss something ? Should we add a note?
>
> best,
> Saïd
>
>
>


Re: Help update PostgreSQL 13.12 to 13.14

2024-04-25 Thread Kashif Zeeshan
Hi Isaac

You are doing a minor version upgrade, so it's not a big effort compared to
a major version upgrade; following is the process to do it.

*Minor releases never change the internal storage format and are always
compatible with earlier and later minor releases of the same major version
number. For example, version 10.1 is compatible with version 10.0 and
version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,
9.5.1, and 9.5.6. To update between compatible versions, you simply replace
the executables while the server is down and restart the server. The data
directory remains unchanged — minor upgrades are that simple.*


Please follow the links below for more information.
https://www.postgresql.org/docs/13/upgrading.html
https://www.postgresql.org/support/versioning/

Thanks
Kashif Zeeshan
Bitnine Global

On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv  wrote:

> Hello everyone, I hope you're doing well. Does anyone have a guide or know
> how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux? I've
> searched in various places but haven't found any solid guides, and truth be
> told, I'm a bit of a novice with PostgreSQL. Any help would be appreciated.
>


Re: Help update PostgreSQL 13.12 to 13.14

2024-04-25 Thread Kashif Zeeshan
On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv  wrote:

> I understand, yes; I've been told it's simple, but I don't understand: do I
> just download the binaries, and which folder do I replace? There is no guide
> as such on how to do it. Could you help me?
>

Follow the steps below:
1. Back up your data
2. Review the release notes of the update release
3. Stop the PG server
4. Upgrade Postgres to the newer version, e.g. on CentOS use the command 'sudo
yum update postgresql'
5. Restart the PG server

Thanks
Kashif Zeeshan
Bitnine Global

>
> On Thu, Apr 25, 2024 at 11:20, Kashif Zeeshan ()
> wrote:
>
>> Hi Isaac
>>
>> You are doing the minor version upgrade so it's not a big effort as
>> compared to major version upgrade, following is the process to do it.
>>
>> *Minor releases never change the internal storage format and are always
>> compatible with earlier and later minor releases of the same major version
>> number. For example, version 10.1 is compatible with version 10.0 and
>> version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,
>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace
>> the executables while the server is down and restart the server. The data
>> directory remains unchanged — minor upgrades are that simple.*
>>
>>
>> Please follow the links below for more information.
>> https://www.postgresql.org/docs/13/upgrading.html
>> https://www.postgresql.org/support/versioning/
>>
>> Thanks
>> Kashif Zeeshan
>> Bitnine Global
>>
>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv  wrote:
>>
>>> Hello everyone, I hope you're doing well. Does anyone have a guide or
>>> know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?
>>> I've searched in various places but haven't found any solid guides, and
>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be
>>> appreciated.
>>>
>>


Re: Help update PostgreSQL 13.12 to 13.14

2024-04-26 Thread Kashif Zeeshan
On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv  wrote:

> Look, I tried with yum and it did update, but nevertheless it did not
> update to 13.14
>
> sudo yum update postgresql13
> Updating Subscription Management repositories.
>
> This system is registered with an entitlement server, but is not receiving
> updates. You can use subscription-manager to assign subscriptions.
>
> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36 AM
> CST.
> Dependencies resolved.
> Nothing to do.
> Complete!
>

It seems yum is not able to get the latest package update; try clearing
the cache and rebuilding it:

yum clean all

yum makecache



>
> On Thu, Apr 25, 2024 at 23:16, Kashif Zeeshan ()
> wrote:
>
>>
>>
>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv 
>> wrote:
>>
>>> I understand, yes; I've been told it's simple, but I don't understand: do I
>>> just download the binaries, and which folder do I replace? There is no
>>> guide as such on how to do it. Could you help me?
>>>
>>
>> Follow the below steps
>> 1. Backup your data
>> 2. Review the release notes of the update release
>> 3. Stop the PG Server
>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command
>> 'sudo yum update postgresql'
>> 5. Restart PG Server
>>
>> Thanks
>> Kashif Zeeshan
>> Bitnine Global
>>
>>>
>>> On Thu, Apr 25, 2024 at 11:20, Kashif Zeeshan (<
>>> kashi.zees...@gmail.com>) wrote:
>>>
>>>> Hi Isaac
>>>>
>>>> You are doing the minor version upgrade so it's not a big effort as
>>>> compared to major version upgrade, following is the process to do it.
>>>>
>>>> *Minor releases never change the internal storage format and are always
>>>> compatible with earlier and later minor releases of the same major version
>>>> number. For example, version 10.1 is compatible with version 10.0 and
>>>> version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,
>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply replace
>>>> the executables while the server is down and restart the server. The data
>>>> directory remains unchanged — minor upgrades are that simple.*
>>>>
>>>>
>>>> Please follow the links below for more information.
>>>> https://www.postgresql.org/docs/13/upgrading.html
>>>> https://www.postgresql.org/support/versioning/
>>>>
>>>> Thanks
>>>> Kashif Zeeshan
>>>> Bitnine Global
>>>>
>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv 
>>>> wrote:
>>>>
>>>>> Hello everyone, I hope you're doing well. Does anyone have a guide or
>>>>> know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on Linux?
>>>>> I've searched in various places but haven't found any solid guides, and
>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be
>>>>> appreciated.
>>>>>
>>>>


Re: New committers: Melanie Plageman, Richard Guo

2024-04-26 Thread Kashif Zeeshan
Congratulations!

On Sat, Apr 27, 2024 at 11:34 AM Andrey M. Borodin 
wrote:

>
>
> > On 26 Apr 2024, at 16:54, Jonathan S. Katz  wrote:
> >
> > The Core Team would like to extend our congratulations to Melanie
> Plageman and Richard Guo, who have accepted invitations to become our
> newest PostgreSQL committers.
> >
> > Please join us in wishing them much success and few reverts!
>
> Congratulations!
>
>
> Best regards, Andrey Borodin.
>
>


Re: Help update PostgreSQL 13.12 to 13.14

2024-04-27 Thread Kashif Zeeshan
Glad to be of help.
pg_upgrade is used for major version upgrades, e.g. from PG 13 to 14, etc.

Regards
Kashif Zeeshan
Bitnine Global

On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv  wrote:

> Hello, I just did it and it went fine. Then I stopped the server and applied
> sudo yum update postgresql13 again, and it returned again the message that
> there are no more pending updates.
> I see there is pg_upgrade, but I don't quite understand how to use it.
>
> Best regards and many thanks
>
> On Fri, Apr 26, 2024 at 11:34, Kashif Zeeshan ()
> wrote:
>
>>
>>
>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv  wrote:
>>
>>> Look, I tried with yum and it did update, but nevertheless it did not
>>> update to 13.14
>>>
>>> sudo yum update postgresql13
>>> Updating Subscription Management repositories.
>>>
>>> This system is registered with an entitlement server, but is not
>>> receiving updates. You can use subscription-manager to assign subscriptions.
>>>
>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024 10:01:36
>>> AM CST.
>>> Dependencies resolved.
>>> Nothing to do.
>>> Complete!
>>>
>>
>> It seems yum is not able to get the latest package update; try clearing
>> the cache and rebuilding it:
>>
>> yum clean all
>>
>> yum makecache
>>
>>
>>
>>>
>>> On Thu, Apr 25, 2024 at 23:16, Kashif Zeeshan (<
>>> kashi.zees...@gmail.com>) wrote:
>>>
>>>>
>>>>
>>>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv 
>>>> wrote:
>>>>
>>>>> I understand, yes; I've been told it's simple, but I don't understand:
>>>>> do I just download the binaries, and which folder do I replace? There is
>>>>> no guide as such on how to do it. Could you help me?
>>>>>
>>>>
>>>> Follow the below steps
>>>> 1. Backup your data
>>>> 2. Review the release notes of the update release
>>>> 3. Stop the PG Server
>>>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command
>>>> 'sudo yum update postgresql'
>>>> 5. Restart PG Server
>>>>
>>>> Thanks
>>>> Kashif Zeeshan
>>>> Bitnine Global
>>>>
>>>>>
>>>>> On Thu, Apr 25, 2024 at 11:20, Kashif Zeeshan (<
>>>>> kashi.zees...@gmail.com>) wrote:
>>>>>
>>>>>> Hi Isaac
>>>>>>
>>>>>> You are doing the minor version upgrade so it's not a big effort as
>>>>>> compared to major version upgrade, following is the process to do it.
>>>>>>
>>>>>> *Minor releases never change the internal storage format and are
>>>>>> always compatible with earlier and later minor releases of the same major
>>>>>> version number. For example, version 10.1 is compatible with version 10.0
>>>>>> and version 10.6. Similarly, for example, 9.5.3 is compatible with 9.5.0,
>>>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply 
>>>>>> replace
>>>>>> the executables while the server is down and restart the server. The data
>>>>>> directory remains unchanged — minor upgrades are that simple.*
>>>>>>
>>>>>>
>>>>>> Please follow the links below for more information.
>>>>>> https://www.postgresql.org/docs/13/upgrading.html
>>>>>> https://www.postgresql.org/support/versioning/
>>>>>>
>>>>>> Thanks
>>>>>> Kashif Zeeshan
>>>>>> Bitnine Global
>>>>>>
>>>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv 
>>>>>> wrote:
>>>>>>
>>>>>>> Hello everyone, I hope you're doing well. Does anyone have a guide
>>>>>>> or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on 
>>>>>>> Linux?
>>>>>>> I've searched in various places but haven't found any solid guides, and
>>>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would be
>>>>>>> appreciated.
>>>>>>>
>>>>>>


Re: Read table rows in chunks

2024-04-27 Thread Kashif Zeeshan
Hi

You can also use the following approaches.

1. Cursors
2. FETCH with OFFSET clause

Regards
Kashif Zeeshan
Bitnine Global
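The cursor approach above boils down to repeatedly fetching a fixed-size batch until the source runs out (FETCH n FROM cur), which sees each row exactly once within the cursor's snapshot. A plain-Python sketch of that control flow (illustrative only; no database involved, names are made up):

```python
from itertools import islice

def fetch_in_chunks(rows, chunk_size):
    """Yield successive chunks, mimicking FETCH chunk_size FROM <cursor>.

    Each row is produced exactly once because a single iterator is
    consumed; there is no OFFSET re-scan that could skip or repeat rows.
    """
    it = iter(rows)
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            break  # the cursor is exhausted
        yield chunk

# Example: process 10 "rows" in chunks of 3.
chunks = list(fetch_in_chunks(range(10), 3))
print(chunks)  # -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Against a real table, the same loop would DECLARE a cursor for the SELECT inside a transaction and FETCH a batch per iteration; the stable cursor position is what removes the ordering hazard of repeated OFFSET/LIMIT queries.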

On Sat, Apr 27, 2024 at 12:47 PM Sushrut Shivaswamy <
sushrut.shivasw...@gmail.com> wrote:

> Hey,
>
> I'm trying to read the rows of a table in chunks to process them in a
> background worker.
> I want to ensure that each row is processed only once.
>
> I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT
> {limit_size}` functionality for this, but I'm running into issues.
>
> Some approaches I had in mind that aren't working out:
>  - Try to use the transaction id to query rows created since the last
> processed transaction id
>   - It seems Postgres does not expose row transaction ids so this
> approach is not feasible
>  - Rely on OFFSET / LIMIT combination to query the next chunk of data
>   - SELECT * does not guarantee ordering of rows so it's possible
> older rows repeat or newer rows are missed in a chunk
>
> Can you please suggest any alternative to periodically read rows from a
> table in chunks while processing each row exactly once.
>
> Thanks,
> Sushrut
>
>
>
>


Re: small documentation fixes related to collations/ICU

2024-04-29 Thread Kashif Zeeshan
Looks good.

On Mon, Apr 29, 2024 at 12:05 PM Peter Eisentraut 
wrote:

> I found two mistakes related to collation and/or ICU support in the
> documentation that should probably be fixed and backpatched.  See
> attached patches.


Re: Help update PostgreSQL 13.12 to 13.14

2024-04-29 Thread Kashif Zeeshan
On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv  wrote:

> OK, I understand, yes, but my version is still 13.12 and I need it to be
> 13.14. It tells me there are no more updates, but really there are; I no
> longer know how to update it to 13.14
>

Hi

Please make sure that your Postgres repository is set up properly; that's
the most likely reason it's not finding v13.14. Please follow the link below.

https://www.postgresql.org/download/linux/redhat/

Alternatively, you can download the v13.14 packages directly and install
them on your system, which will upgrade your existing installation.

Regards
Kashif Zeeshan
Bitnine Global

>
> On Sat, Apr 27, 2024 at 9:29, Kashif Zeeshan ()
> wrote:
>
>> Glad to be of help.
>> pg_uprade is used with major version upgrade e.g. from PG13 to 14 etc
>>
>> Regards
>> Kashif Zeeshan
>> Bitnine Global
>>
>> On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv 
>> wrote:
>>
>>> Hello, I just did it and it went fine. Then I stopped the server and
>>> applied sudo yum update postgresql13 again, and it returned again the
>>> message that there are no more pending updates.
>>> I see there is pg_upgrade, but I don't quite understand how to use it.
>>>
>>> Best regards and many thanks
>>>
>>> On Fri, Apr 26, 2024 at 11:34, Kashif Zeeshan (<
>>> kashi.zees...@gmail.com>) wrote:
>>>
>>>>
>>>>
>>>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv 
>>>> wrote:
>>>>
>>>>> Look, I tried with yum and it did update, but nevertheless it did not
>>>>> update to 13.14
>>>>>
>>>>> sudo yum update postgresql13
>>>>> Updating Subscription Management repositories.
>>>>>
>>>>> This system is registered with an entitlement server, but is not
>>>>> receiving updates. You can use subscription-manager to assign 
>>>>> subscriptions.
>>>>>
>>>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024
>>>>> 10:01:36 AM CST.
>>>>> Dependencies resolved.
>>>>> Nothing to do.
>>>>> Complete!
>>>>>
>>>>
>>>> It seems yum is not able to get the latest package update; try
>>>> clearing the cache and rebuilding it:
>>>>
>>>> yum clean all
>>>>
>>>> yum makecache
>>>>
>>>>
>>>>
>>>>>
>>>>> On Thu, Apr 25, 2024 at 23:16, Kashif Zeeshan (<
>>>>> kashi.zees...@gmail.com>) wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 25, 2024 at 11:55 PM •Isaac Rv 
>>>>>> wrote:
>>>>>>
>>>>>>> I understand, yes; I've been told it's simple, but I don't understand:
>>>>>>> do I just download the binaries, and which folder do I replace? There
>>>>>>> is no guide as such on how to do it. Could you help me?
>>>>>>>
>>>>>>
>>>>>> Follow the below steps
>>>>>> 1. Backup your data
>>>>>> 2. Review the release notes of the update release
>>>>>> 3. Stop the PG Server
>>>>>> 4. Upgrade postgres to newer version, e.g. on CentOS use the command
>>>>>> 'sudo yum update postgresql'
>>>>>> 5. Restart PG Server
>>>>>>
>>>>>> Thanks
>>>>>> Kashif Zeeshan
>>>>>> Bitnine Global
>>>>>>
>>>>>>>
>>>>>>> On Thu, Apr 25, 2024 at 11:20, Kashif Zeeshan (<
>>>>>>> kashi.zees...@gmail.com>) wrote:
>>>>>>>
>>>>>>>> Hi Isaac
>>>>>>>>
>>>>>>>> You are doing the minor version upgrade so it's not a big effort as
>>>>>>>> compared to major version upgrade, following is the process to do it.
>>>>>>>>
>>>>>>>> *Minor releases never change the internal storage format and are
>>>>>>>> always compatible with earlier and later minor releases of the same 
>>>>>>>> major
>>>>>>>> version number. For example, version 10.1 is compatible with version 
>>>>>>>> 10.0
>>>>>>>> and version 10.6. Similarly, for example, 9.5.3 is compatible with 
>>>>>>>> 9.5.0,
>>>>>>>> 9.5.1, and 9.5.6. To update between compatible versions, you simply 
>>>>>>>> replace
>>>>>>>> the executables while the server is down and restart the server. The 
>>>>>>>> data
>>>>>>>> directory remains unchanged — minor upgrades are that simple.*
>>>>>>>>
>>>>>>>>
>>>>>>>> Please follow the links below for more information.
>>>>>>>> https://www.postgresql.org/docs/13/upgrading.html
>>>>>>>> https://www.postgresql.org/support/versioning/
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Kashif Zeeshan
>>>>>>>> Bitnine Global
>>>>>>>>
>>>>>>>> On Thu, Apr 25, 2024 at 9:37 PM •Isaac Rv 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hello everyone, I hope you're doing well. Does anyone have a guide
>>>>>>>>> or know how to perform an upgrade from PostgreSQL 13.12 to 13.14 on 
>>>>>>>>> Linux?
>>>>>>>>> I've searched in various places but haven't found any solid guides, 
>>>>>>>>> and
>>>>>>>>> truth be told, I'm a bit of a novice with PostgreSQL. Any help would 
>>>>>>>>> be
>>>>>>>>> appreciated.
>>>>>>>>>
>>>>>>>>


Re: Document NULL

2024-05-01 Thread Kashif Zeeshan
On Wed, May 1, 2024 at 8:12 PM David G. Johnston 
wrote:

> Hi,
>
> Over in [1] it was rediscovered that our documentation assumes the reader
> is familiar with NULL.  It seems worthwhile to provide both an introduction
> to the topic and an overview of how this special value gets handled
> throughout the system.
>
> Attached is a very rough draft attempting this, based on my own thoughts
> and those expressed by Tom in [1], which largely align with mine.
>
> I'll flesh this out some more once I get support for the goal, content,
> and placement.  On that point, NULL is a fundamental part of the SQL
> language and so having it be a section in a Chapter titled "SQL Language"
> seems to fit well, even if that falls into our tutorial.  Framing this up
> as tutorial content won't be that hard, though I've skipped on examples and
> such pending feedback.  It really doesn't fit as a top-level chapter under
> part II nor really under any of the other chapters there.  The main issue
> with the tutorial is the forward references to concepts not yet discussed
> but problem points there can be addressed.
>
> I do plan to remove the entity reference and place the content into
> query.sgml directly in the final version.  It is just much easier to write
> an entire new section in its own file.
>

I reviewed the documentation update and it's quite extensive, but I think
it would be better to include some examples as well.

Regards
Kashif Zeeshan

>
> David J.
>
> [1]
> https://www.postgresql.org/message-id/1859814.1714532025%40sss.pgh.pa.us
>
>


Re: Document NULL

2024-05-02 Thread Kashif Zeeshan
Hi David

I reviewed the documentation and it's very detailed.

Thanks
Kashif Zeeshan
Bitnine Global

On Thu, May 2, 2024 at 8:24 PM David G. Johnston 
wrote:

> On Wed, May 1, 2024 at 9:47 PM Tom Lane  wrote:
>
>> David Rowley  writes:
>> > Let's bash it into shape a bit more before going any further on actual
>> wording.
>>
>> FWIW, I want to push back on the idea of making it a tutorial section.
>> I too considered that, but in the end I think it's a better idea to
>> put it into the "main" docs, for two reasons:
>>
>>
> Version 2 attached.  Still a draft, focused on topic picking and overall
> structure.  Examples and links planned plus the usual semantic markup stuff.
>
> I chose to add a new sect1 in the user guide (The SQL Language) chapter,
> "Data".  Don't tell Robert.
>
> The "Data Basics" sub-section lets us readily slide this Chapter into the
> main flow and here the NULL discussion feels like a natural fit.  In
> hindsight, the lack of a Data chapter in a Database manual seems like an
> oversight.  One easily made because we assume if you are here you "know"
> what data is, but there is still stuff to be discussed, if nothing else to
> establish a common understanding between us and our users.
>
> David J.
>
>
>


Re: Help update PostgreSQL 13.12 to 13.14

2024-05-05 Thread Kashif Zeeshan
Hi

A minor-version upgrade works when you have an existing Postgres
installation with the server running.
If you run the following command, it will upgrade the existing
installation of Postgres:
sudo dnf install -y postgresql13-server

But you don't need to execute the commands below, as they would create a
new data directory and start a server on it:
sudo /usr/pgsql-13/bin/postgresql-13-setup initdb
sudo systemctl enable postgresql-13
sudo systemctl start postgresql-13

You only need to install the Postgres version you want; it upgrades the
existing installation, after which you restart the server:
sudo systemctl restart postgresql-13
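
Putting the steps together, a minimal sketch of the minor-version upgrade
(assuming the PGDG repository is already configured and the cluster was
already initialized) would be:

```shell
# Install the newer 13.x package; dnf upgrades the existing
# installation in place.
sudo dnf install -y postgresql13-server

# Restart the server so it runs the new binaries.
sudo systemctl restart postgresql-13

# Confirm the server now reports the new minor version.
sudo -u postgres psql -c 'SELECT version();'
```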

The following commands you mentioned set up the Postgres repositories,
which are used to download and install Postgres packages.
# Install the RPM from the repository:
sudo dnf install -y
https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
But you don't need to run the disable command; it disables the
distribution's built-in postgresql module, not the repository you
installed above.
sudo dnf -qy module disable postgresql


Regards
Kashif Zeeshan
Bitnine Global

On Fri, May 3, 2024 at 7:28 PM •Isaac Rv  wrote:

> Hello, these are the steps I was given
>
> But is this only to install the yum repo? Or does it install a new
> instance of PostgreSQL?
> And what happens if yum is already installed but badly configured, as you
> said?
>
> # Install the RPM from the repository:
> sudo dnf install -y
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
>
> sudo dnf -qy module disable postgresql
>
> sudo dnf install -y postgresql13-server
>
> sudo /usr/pgsql-13/bin/postgresql-13-setup initdb
> sudo systemctl enable postgresql-13
> sudo systemctl start postgresql-13
>
>
> I look forward to your reply
>
>
> Regards
>
> On Mon, 29 Apr 2024 at 21:59, Kashif Zeeshan ()
> wrote:
>
>>
>>
>> On Mon, Apr 29, 2024 at 9:07 PM •Isaac Rv  wrote:
>>
>>> OK, I understand, but my version is still 13.12 and I need it to be
>>> 13.14. It tells me there are no more updates, but there really are; I
>>> no longer know how to update it to 13.14
>>>
>>
>> Hi
>>
>> Please make sure that your Postgres repository is set up properly; that
>> is the most likely reason it's not finding v13.14. Please follow the link below.
>>
>> https://www.postgresql.org/download/linux/redhat/
>>
>> Another way to work around it is to download the v13.14 packages and
>> install them directly, which will upgrade your existing installation.
>>
>> Regards
>> Kashif Zeeshan
>> Bitnine Global
>>
>>>
>>> On Sat, 27 Apr 2024 at 9:29, Kashif Zeeshan ()
>>> wrote:
>>>
>>>> Glad to be of help.
>>>> pg_upgrade is used for major version upgrades, e.g. from PG13 to PG14, etc.
>>>>
>>>> Regards
>>>> Kashif Zeeshan
>>>> Bitnine Global
>>>>
>>>> On Fri, Apr 26, 2024 at 10:47 PM •Isaac Rv 
>>>> wrote:
>>>>
>>>>> Hello, I just did it and it went fine. Then I stopped the server,
>>>>> applied sudo yum update postgresql13 again, and it again returned the
>>>>> message that there are no more pending updates.
>>>>> I see there is pg_upgrade, but I don't quite understand how to use it
>>>>>
>>>>> Regards and many thanks
>>>>>
>>>>> On Fri, 26 Apr 2024 at 11:34, Kashif Zeeshan (<
>>>>> kashi.zees...@gmail.com>) wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Apr 26, 2024 at 9:22 PM •Isaac Rv 
>>>>>> wrote:
>>>>>>
>>>>>>> Look, I tried with yum and it did update, but it still did not
>>>>>>> update to 13.14
>>>>>>>
>>>>>>> sudo yum update postgresql13
>>>>>>> Updating Subscription Management repositories.
>>>>>>>
>>>>>>> This system is registered with an entitlement server, but is not
>>>>>>> receiving updates. You can use subscription-manager to assign 
>>>>>>> subscriptions.
>>>>>>>
>>>>>>> Last metadata expiration check: 0:07:02 ago on Fri 26 Apr 2024
>>>>>>> 10:01:36 AM CST.
>>>>>>> Dependencies resolved.
>>>>>>> Nothing to do.
>>>>>>> Complete!
>>>>>>>
>>>>>>
>>>>>> It seemed yum is

Re:

2024-05-09 Thread Kashif Zeeshan
Hi


On Thu, May 9, 2024 at 2:50 PM Rajan Pandey 
wrote:

> Hi everyone, I just installed Postgres and pg_tle extension as I was
> looking to contribute to pg_tle.
>
> Somehow, I am unable to update the shared_preload_libraries. It feels like
> ALTER has happened but the SPL value is not updated:
>
>> test=# show shared_preload_libraries;
>>  shared_preload_libraries
>> --
>>
>> (1 row)
>>
>> test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';
>> ALTER SYSTEM
>> test=# SELECT pg_reload_conf();
>>  pg_reload_conf
>> 
>>  t
>> (1 row)
>>
>> test=# show shared_preload_libraries;
>>  shared_preload_libraries
>> --
>>
>> (1 row)
>>
>> test=#
>>
>
>
> I'm unable to open the postgresql.conf file to update it either. I provided
> the correct macbook password above. But it is not accepted! :/
>
>> rajanx@b0be835adb74 postgresql % cat
>> /usr/local/pgsql/data/postgresql.auto.conf
>> cat: /usr/local/pgsql/data/postgresql.auto.conf: Permission denied
>
> rajanx@b0be835adb74 postgresql % su cat
>> /usr/local/pgsql/data/postgresql.auto.conf
>> Password:
>> su: Sorry
>>
>
The issue is related to file permissions. Please check which OS user
performed the installation, and read or update the postgresql.auto.conf
file as that user. Note also that shared_preload_libraries only takes
effect after a full server restart; pg_reload_conf() is not enough for it.
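
As a quick sketch of diagnosing this (paths taken from your output; the
owning user is commonly postgres, but may differ on your install):

```shell
# See which OS user owns the data directory and the auto.conf file.
ls -ld /usr/local/pgsql/data
ls -l /usr/local/pgsql/data/postgresql.auto.conf

# Read the file as the owning user (postgres here is an assumption),
# instead of a bare `su`, which switches to root and may be disabled.
sudo -u postgres cat /usr/local/pgsql/data/postgresql.auto.conf
```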

Regards
Kashif Zeeshan
Bitnine Global

>
> Please help! Thank you. :)
> --
> Regards
> Rajan Pandey
>


Re: PostgreSQL 17 Beta 1 release announcement draft

2024-05-15 Thread Kashif Zeeshan
Hi Jonathan

I did the review and did not find any issues.

Regards
Kashif Zeeshan
Bitnine Global

On Thu, May 16, 2024 at 6:45 AM Jonathan S. Katz 
wrote:

> Hi,
>
> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement
> draft. This contains a user-facing summary of some of the features that
> will be available in the Beta, as well as a call to test. I've made an
> effort to group them logically around the different workflows they affect.
>
> A few notes:
>
> * The section with the features is not 80-char delimited. I will do that
> before the final copy
>
> * There is an explicit callout that we've added in the SQL/JSON features
> that were previously reverted in PG15. I want to ensure we're
> transparent about that, but also use it as a hook to get people testing.
>
> When reviewing:
>
> * Please check for correctness of feature descriptions, keeping in mind
> this is targeted for a general audience
>
> * Please indicate if you believe there's a notable omission, or if we
> should omit any of these callouts
>
> * Please indicate if a description is confusing - I'm happy to rewrite
> to ensure it's clearer.
>
> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the
> beta release takes some extra effort, I want to ensure all changes are
> in with time to spare before release day.
>
> Thanks,
>
> Jonathan
>


Re: Pre-Commitfest Party on StHighload conf

2024-05-15 Thread Kashif Zeeshan
Great initiative.

On Thu, May 16, 2024 at 10:59 AM Andrey M. Borodin 
wrote:

> Hi hackers!
>
> StHighload conference will be held on June 24-25[0]. I’m planning to do
> “Pre-Commitfest Party” there.
>
> The idea is to help promote patches among potential reviewers. And start
> working with the very beginning of PG18 development cycle.
> Good patch review of a valuable feature is a great addition to a CV, and
> we will advertise this fact among conference attendees.
>
> If you are the patch author, can be around on conference dates and willing
> to present your patch - please contact me or just fill the registration
> form [1].
>
> Postgres Professional will organize the event, provide us ~1h of a stage
> time and unlimited backstage discussion in their tent. I’ll serve as a
> moderator, and maybe present something myself.
> If your work is not on Commitfest yet, but you are planning to finish a
> prototype by the end of the June - feel free to register anyway.
> If you do not have a ticket to StHighload - we have some speaker entrance
> tickets.
> At the moment we have 4 potential patch authors ready to present.
>
> Please contact me with any questions regarding the event. Thanks!
>
>
> Best regards, Andrey Borodin.
>
> [0] https://highload.ru/spb/2024/
> [1] https://forms.yandex.ru/u/6634e043c417f3cae70775a6/
>
>


Re: Switch background worker on/off in runtime.

2024-05-31 Thread Kashif Zeeshan
Hi Ishan



On Fri, May 31, 2024 at 2:28 PM ISHAN CHHANGANI . <
f20200...@hyderabad.bits-pilani.ac.in> wrote:

> Hi,
>
> Is it possible to switch on/off a background worker in runtime?
>
As far as I understand, there is no built-in way to do this at runtime,
but you can terminate the worker by using the following command:

select pg_terminate_backend(pid of bgworker);
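
As a sketch, the worker's pid can be looked up in pg_stat_activity by its
registered name ('my worker' below is a placeholder for the worker's
bgw_type):

```sql
-- Find the background worker's pid by its type name (placeholder value).
SELECT pid, backend_type
  FROM pg_stat_activity
 WHERE backend_type = 'my worker';

-- Terminate it (sends SIGTERM).
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE backend_type = 'my worker';
```

Note that unless the worker was registered with bgw_restart_time =
BGW_NEVER_RESTART, the postmaster will typically restart it after it
exits.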

Regards
Kashif Zeeshan
Bitnine Global

>
> worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
>
> worker.bgw_start_time = BgWorkerStart_PostmasterStart;
>
>
>
> I want to switch off the worker based on some flag value, etc, either from
> the main process or the worker itself.
>
>
> Are there any already existing examples?
>
> Thanks,
>
> Ishan.
>
> The information contained in this electronic communication is intended
> solely for the individual(s) or entity to which it is addressed. It may
> contain proprietary, confidential and/or legally privileged information.
> Any review, retransmission, dissemination, printing, copying or other use
> of, or taking any action in reliance on the contents of this information by
> person(s) or entities other than the intended recipient is strictly
> prohibited and may be unlawful. If you have received this communication in
> error, please notify us by responding to this email or telephone and
> immediately and permanently delete all copies of this message and any
> attachments from your system(s). The contents of this message do not
> necessarily represent the views or policies of BITS Pilani.
>


Re: cannot drop intarray extension

2024-06-02 Thread Kashif Zeeshan
Hi Jian

On Mon, Jun 3, 2024 at 9:14 AM jian he  wrote:

> hi.
>
>  setup
> drop table if exists test__int cascade;
> create extension intarray;
>
> CREATE TABLE test__int( a int[] );
> CREATE INDEX text_idx on test__int using gist (a gist__intbig_ops(siglen =
> 1));
> drop extension intarray cascade;
> NOTICE:  drop cascades to index text_idx
> 2024-06-03 11:53:32.629 CST [41165] ERROR:  cache lookup failed for
> function 17758
>

It's a bug.


> 2024-06-03 11:53:32.629 CST [41165] STATEMENT:  drop extension intarray
> cascade;
> ERROR:  cache lookup failed for function 17758
>
> 
> backtrace info:
> index_getprocinfo
> #0  index_opclass_options (indrel=0x7faeca727b58, attnum=1,
> attoptions=94372901674408, validate=false)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:1034
> #1  0x55d4e63a79cb in RelationGetIndexAttOptions
> (relation=0x7faeca727b58, copy=false)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:5872
> #2  0x55d4e639d72d in RelationInitIndexAccessInfo
> (relation=0x7faeca727b58)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1569
> #3  0x55d4e639c5ac in RelationBuildDesc (targetRelId=24582,
> insertIt=true)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:1207
> #4  0x55d4e639e9ce in RelationIdGetRelation (relationId=24582)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/utils/cache/relcache.c:2115
> #5  0x55d4e5a412fd in relation_open (relationId=24582, lockmode=8)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/access/common/relation.c:58
> #6  0x55d4e5ae6a06 in index_open (relationId=24582, lockmode=8)
> at
> ../../Desktop/pg_src/src4/postgres/src/backend/access/index/indexam.c:137
> #7  0x55d4e5be61b8 in index_drop (indexId=24582, concurrent=false,
> concurrent_lock_mode=false)
> at ../../Desktop/pg_src/src4/postgres/src/backend/catalog/index.c:2156
> 
> i guess it's because we first dropped the function g_intbig_options
> then later we need it.
>
>
>


Re: WIP/PoC for parallel backup

2020-04-07 Thread Kashif Zeeshan
On Fri, Apr 3, 2020 at 3:01 PM Kashif Zeeshan <
kashif.zees...@enterprisedb.com> wrote:

> Hi Asif
>
> When a non-existent slot is used with a tablespace, the correct error is
> displayed, but the backup folder is not cleaned up afterwards, leaving a
> corrupt backup.
>
> Steps
> ===
>
> edb@localhost bin]$
> [edb@localhost bin]$ mkdir /home/edb/tbl1
> [edb@localhost bin]$ mkdir /home/edb/tbl_res
> [edb@localhost bin]$
> postgres=#  create tablespace tbl1 location '/home/edb/tbl1';
> CREATE TABLESPACE
> postgres=#
> postgres=# create table t1 (a int) tablespace tbl1;
> CREATE TABLE
> postgres=# insert into t1 values(100);
> INSERT 0 1
> postgres=# insert into t1 values(200);
> INSERT 0 1
> postgres=# insert into t1 values(300);
> INSERT 0 1
> postgres=#
>
>
> [edb@localhost bin]$
> [edb@localhost bin]$  ./pg_basebackup -v -j 2 -D
>  /home/edb/Desktop/backup/ -T /home/edb/tbl1=/home/edb/tbl_res -S test
> pg_basebackup: initiating base backup, waiting for checkpoint to complete
> pg_basebackup: checkpoint completed
> pg_basebackup: write-ahead log start point: 0/2E28 on timeline 1
> pg_basebackup: starting background WAL receiver
> pg_basebackup: error: could not send replication command
> "START_REPLICATION": ERROR:  replication slot "test" does not exist
> pg_basebackup: backup worker (0) created
> pg_basebackup: backup worker (1) created
> pg_basebackup: write-ahead log end point: 0/2E000100
> pg_basebackup: waiting for background process to finish streaming ...
> pg_basebackup: error: child thread exited with error 1
> [edb@localhost bin]$
>
> backup folder not cleaned
>
> [edb@localhost bin]$
> [edb@localhost bin]$
> [edb@localhost bin]$
> [edb@localhost bin]$ ls /home/edb/Desktop/backup
> backup_label  base  global  pg_commit_ts  pg_dynshmem  pg_hba.conf
> pg_ident.conf  pg_logical  pg_multixact  pg_notify  pg_replslot
> pg_serial  pg_snapshots  pg_stat  pg_stat_tmp  pg_subtrans  pg_tblspc
> pg_twophase  PG_VERSION  pg_wal  pg_xact  postgresql.auto.conf
> postgresql.conf
> [edb@localhost bin]$
>
>
>
>
> If the same case is executed without the parallel backup patch, the
> backup folder is cleaned up after the error is displayed.
>
> [edb@localhost bin]$ ./pg_basebackup -v -D  /home/edb/Desktop/backup/ -T
> /home/edb/tbl1=/home/edb/tbl_res -S test999
> pg_basebackup: initiating base backup, waiting for checkpoint to complete
> pg_basebackup: checkpoint completed
> pg_basebackup: write-ahead log start point: 0/2B28 on timeline 1
> pg_basebackup: starting background WAL receiver
> pg_basebackup: error: could not send replication command
> "START_REPLICATION": ERROR:  replication slot "test999" does not exist
> pg_basebackup: write-ahead log end point: 0/2B000100
> pg_basebackup: waiting for background process to finish streaming ...
> pg_basebackup: error: child process exited with exit code 1
> *pg_basebackup: removing data directory " /home/edb/Desktop/backup"*
> pg_basebackup: changes to tablespace directories will not be undone
>


Hi Asif

A similar case: when the DB server is shut down while a parallel backup is
in progress, the correct error is displayed, but the backup folder is not
cleaned up, leaving a corrupt backup. I think a single bug fix will cover
all these cases where cleanup is not done when a parallel backup fails.

[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$  ./pg_basebackup -v -D  /home/edb/Desktop/backup/ -j 8
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/C128 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_57337"
pg_basebackup: backup worker (0) created
pg_basebackup: backup worker (1) created
pg_basebackup: backup worker (2) created
pg_basebackup: backup worker (3) created
pg_basebackup: backup worker (4) created
pg_basebackup: backup worker (5) created
pg_basebackup: backup worker (6) created
pg_basebackup: backup worker (7) created
pg_basebackup: error: could not read COPY data: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_basebackup: error: could not read COPY data: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[edb@localhost bin]$
[edb@localhost bin]$

When the same case is executed with pg_basebackup without the parallel
backup patch, proper cleanup is done.

[edb@localhost bin]$
[edb@localhost bin]$  ./pg_basebackup -v -D  /home/edb/Deskt

Re: WIP/PoC for parallel backup

2020-04-08 Thread Kashif Zeeshan
2ca0) at
pg_basebackup.c:2811
#6  0x0040798f in BaseBackup () at pg_basebackup.c:2211
#7  0x00408b4d in main (argc=6, argv=0x7ffe3dabc718) at
pg_basebackup.c:2765
(gdb)




b) When executing two backups at the same time, a FATAL error is raised
due to max_wal_senders, but instead of exiting, the backup proceeded and
ended in a segmentation fault.

[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$  ./pg_basebackup -v -j 8 -D  /home/edb/Desktop/backup1/
pg_basebackup: warning: backup manifest is disabled in parallel backup mode
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 1/DA28 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_17066"
pg_basebackup: backup worker (0) created
pg_basebackup: backup worker (1) created
pg_basebackup: backup worker (2) created
pg_basebackup: backup worker (3) created
pg_basebackup: backup worker (4) created
pg_basebackup: backup worker (5) created
pg_basebackup: backup worker (6) created
pg_basebackup: error: could not connect to server: FATAL:  number of
requested standby connections exceeds max_wal_senders (currently 10)
Segmentation fault (core dumped)
[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$ gdb pg_basebackup
/tmp/cores/core.pg_basebackup.17041.localhost.localdomain.1586353696
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html
>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from
/home/edb/Communtiy_Parallel_backup/postgresql/inst/bin/pg_basebackup...done.
[New LWP 17041]
[New LWP 17067]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./pg_basebackup -v -j 8 -D
/home/edb/Desktop/backup1/'.
Program terminated with signal 11, Segmentation fault.
#0  pthread_join (threadid=0, thread_return=0x0) at pthread_join.c:47
47  if (INVALID_NOT_TERMINATED_TD_P (pd))
(gdb) bt
#0  pthread_join (threadid=0, thread_return=0x0) at pthread_join.c:47
#1  0x0040904a in cleanup_workers () at pg_basebackup.c:2978
#2  0x00403806 in disconnect_atexit () at pg_basebackup.c:332
#3  0x7f051edc1a49 in __run_exit_handlers (status=1,
listp=0x7f051f1436c8 <__exit_funcs>,
run_list_atexit=run_list_atexit@entry=true)
at exit.c:77
#4  0x7f051edc1a95 in __GI_exit (status=) at exit.c:99
#5  0x00408c54 in create_parallel_workers (backupinfo=0x1c6dca0) at
pg_basebackup.c:2811
#6  0x0040798f in BaseBackup () at pg_basebackup.c:2211
#7  0x00408b4d in main (argc=6, argv=0x7ffdb76a6d68) at
pg_basebackup.c:2765
(gdb)




2) The following bug is not fixed yet

A similar case: when the DB server is shut down while a parallel backup is
in progress, the correct error is displayed, but the backup folder is not
cleaned up, leaving a corrupt backup.

[edb@localhost bin]$
[edb@localhost bin]$ ./pg_basebackup -v -D  /home/edb/Desktop/backup/ -j 8
pg_basebackup: warning: backup manifest is disabled in parallel backup mode
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/A028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_16235"
pg_basebackup: backup worker (0) created
pg_basebackup: backup worker (1) created
pg_basebackup: backup worker (2) created
pg_basebackup: backup worker (3) created
pg_basebackup: backup worker (4) created
pg_basebackup: backup worker (5) created
pg_basebackup: backup worker (6) created
pg_basebackup: backup worker (7) created
pg_basebackup: error: could not read COPY data: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_basebackup: error: could not read COPY data: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_basebackup: removing contents of data directory
"/home/edb/Desktop/backup/"
pg_basebackup: error: could not read COPY data: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$



[edb@localhost bin]$
[edb@localhost bin]$ ls /home/edb/Desktop/backup
base   

Re: WIP/PoC for parallel backup

2020-04-14 Thread Kashif Zeeshan
Hi Asif

Getting the following error in parallel backup when the --no-manifest
option is used.

[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$  ./pg_basebackup -v -j 5  -D
 /home/edb/Desktop/backup/ --no-manifest
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/228 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_10223"
pg_basebackup: backup worker (0) created
pg_basebackup: backup worker (1) created
pg_basebackup: backup worker (2) created
pg_basebackup: backup worker (3) created
pg_basebackup: backup worker (4) created
pg_basebackup: write-ahead log end point: 0/2000100
pg_basebackup: error: could not get data for 'BUILD_MANIFEST': ERROR:
 could not open file
"base/pgsql_tmp/pgsql_tmp_b4ef5ac0fd150b2a28caf626bbb1bef2.1": No such file
or directory
pg_basebackup: removing contents of data directory
"/home/edb/Desktop/backup/"
[edb@localhost bin]$

Thanks

On Tue, Apr 14, 2020 at 5:33 PM Asif Rehman  wrote:

>
>
> On Wed, Apr 8, 2020 at 6:53 PM Kashif Zeeshan <
> kashif.zees...@enterprisedb.com> wrote:
>
>>
>>
>> On Tue, Apr 7, 2020 at 9:44 PM Asif Rehman 
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks, Kashif and Rajkumar. I have fixed the reported issues.
>>>
>>> I have added the shared state as previously described. The new grammar
>>> changes
>>> are as follows:
>>>
>>> START_BACKUP [LABEL ''] [FAST] [MAX_RATE %d]
>>> - This will generate a unique backupid using pg_strong_random(16)
>>> and hex-encoded
>>>   it. which is then returned as the result set.
>>> - It will also create a shared state and add it to the hashtable.
>>> The hash table size is set
>>>   to BACKUP_HASH_SIZE=10, but since hashtable can expand
>>> dynamically, I think it's
>>>   sufficient initial size. max_wal_senders is not used, because it
>>> can be set to quite a
>>>   large values.
>>>
>>> JOIN_BACKUP 'backup_id'
>>> - finds 'backup_id' in hashtable and attaches it to server process.
>>>
>>>
>>> SEND_FILE '(' 'FILE' ')' [NOVERIFY_CHECKSUMS]
>>> - renamed SEND_FILES to SEND_FILE
>>> - removed START_WAL_LOCATION from this because 'startptr' is now
>>> accessible through
>>>   shared state.
>>>
>>> There is no change in other commands:
>>> STOP_BACKUP [NOWAIT]
>>> LIST_TABLESPACES [PROGRESS]
>>> LIST_FILES [TABLESPACE]
>>> LIST_WAL_FILES [START_WAL_LOCATION 'X/X'] [END_WAL_LOCATION 'X/X']
>>>
>>> The current patches (v11) have been rebased to the latest master. The
>>> backup manifest is enabled
>>> by default, so I have disabled it for parallel backup mode and have
>>> generated a warning so that
>>> user is aware of it and not expect it in the backup.
>>>
>>> Hi Asif
>>
>> I have verified the bug fixes, one bug is fixed and working now as
>> expected
>>
>> For the verification of the other bug fixes faced following issues,
>> please have a look.
>>
>>
>> 1) Following bug fixes mentioned below are generating segmentation fault.
>>
>> Please note for reference I have added a description only as steps were
>> given in previous emails of each bug I tried to verify the fix. Backtrace
>> is also added with each case which points to one bug for both the cases.
>>
>> a) The backup failed with errors "error: could not connect to server:
>> could not look up local user ID 1000: Too many open files" when the
>> max_wal_senders was set to 2000.
>>
>>
>> [edb@localhost bin]$ ./pg_basebackup -v -j 1990 -D
>>  /home/edb/Desktop/backup/
>> pg_basebackup: warning: backup manifest is disabled in parallel backup
>> mode
>> pg_basebackup: initiating base backup, waiting for checkpoint to complete
>> pg_basebackup: checkpoint completed
>> pg_basebackup: write-ahead log start point: 0/228 on timeline 1
>> pg_basebackup: starting background WAL receiver
>> pg_basebackup: created temporary replication slot "pg_basebackup_9925"
>> pg_basebackup: backup worker (0) created
>> pg_basebackup: backup worker (1) created
>> pg_basebackup: backup worker (2) created
>> pg_basebackup: backup worker (3) created
>> ….
>> ….
>> pg_basebackup: back

Re: WIP/PoC for parallel backup

2020-04-16 Thread Kashif Zeeshan
On Tue, Apr 14, 2020 at 7:37 PM Asif Rehman  wrote:

>
>
> On Tue, Apr 14, 2020 at 6:32 PM Kashif Zeeshan <
> kashif.zees...@enterprisedb.com> wrote:
>
>> Hi Asif
>>
>> Getting the following error on Parallel backup when --no-manifest option
>> is used.
>>
>> [edb@localhost bin]$
>> [edb@localhost bin]$
>> [edb@localhost bin]$  ./pg_basebackup -v -j 5  -D
>>  /home/edb/Desktop/backup/ --no-manifest
>> pg_basebackup: initiating base backup, waiting for checkpoint to complete
>> pg_basebackup: checkpoint completed
>> pg_basebackup: write-ahead log start point: 0/228 on timeline 1
>> pg_basebackup: starting background WAL receiver
>> pg_basebackup: created temporary replication slot "pg_basebackup_10223"
>> pg_basebackup: backup worker (0) created
>> pg_basebackup: backup worker (1) created
>> pg_basebackup: backup worker (2) created
>> pg_basebackup: backup worker (3) created
>> pg_basebackup: backup worker (4) created
>> pg_basebackup: write-ahead log end point: 0/2000100
>> pg_basebackup: error: could not get data for 'BUILD_MANIFEST': ERROR:
>>  could not open file
>> "base/pgsql_tmp/pgsql_tmp_b4ef5ac0fd150b2a28caf626bbb1bef2.1": No such file
>> or directory
>> pg_basebackup: removing contents of data directory
>> "/home/edb/Desktop/backup/"
>> [edb@localhost bin]$
>>
>
> I forgot to make a check for no-manifest. Fixed. Attached is the updated
> patch.
>
Hi Asif

Verified the fix, thanks.

[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$ ./pg_basebackup -v -j 5 -D
/home/edb/Desktop/backup --no-manifest
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/428 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_27407"
pg_basebackup: backup worker (0) created
pg_basebackup: backup worker (1) created
pg_basebackup: backup worker (2) created
pg_basebackup: backup worker (3) created
pg_basebackup: backup worker (4) created
pg_basebackup: write-ahead log end point: 0/4000100
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed
[edb@localhost bin]$
[edb@localhost bin]$ ls /home/edb/Desktop/backup
backup_label  base  global  pg_commit_ts  pg_dynshmem  pg_hba.conf
pg_ident.conf  pg_logical  pg_multixact  pg_notify  pg_replslot
pg_serial  pg_snapshots  pg_stat  pg_stat_tmp  pg_subtrans  pg_tblspc
pg_twophase  PG_VERSION  pg_wal  pg_xact  postgresql.auto.conf
postgresql.conf
[edb@localhost bin]$
[edb@localhost bin]$
[edb@localhost bin]$

Regards
Kashif Zeeshan

>
>
>> Thanks
>>
>> On Tue, Apr 14, 2020 at 5:33 PM Asif Rehman 
>> wrote:
>>
>>>
>>>
>>> On Wed, Apr 8, 2020 at 6:53 PM Kashif Zeeshan <
>>> kashif.zees...@enterprisedb.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Apr 7, 2020 at 9:44 PM Asif Rehman 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks, Kashif and Rajkumar. I have fixed the reported issues.
>>>>>
>>>>> I have added the shared state as previously described. The new grammar
>>>>> changes
>>>>> are as follows:
>>>>>
>>>>> START_BACKUP [LABEL ''] [FAST] [MAX_RATE %d]
>>>>> - This will generate a unique backupid using pg_strong_random(16)
>>>>> and hex-encoded
>>>>>   it. which is then returned as the result set.
>>>>> - It will also create a shared state and add it to the hashtable.
>>>>> The hash table size is set
>>>>>   to BACKUP_HASH_SIZE=10, but since hashtable can expand
>>>>> dynamically, I think it's
>>>>>   sufficient initial size. max_wal_senders is not used, because it
>>>>> can be set to quite a
>>>>>   large values.
>>>>>
>>>>> JOIN_BACKUP 'backup_id'
>>>>> - finds 'backup_id' in hashtable and attaches it to server process.
>>>>>
>>>>>
>>>>> SEND_FILE '(' 'FILE' ')' [NOVERIFY_CHECKSUMS]
>>>>> - renamed SEND_FILES to SEND_FILE
>>>>> - removed START_WAL_LOCATION from this because 'startptr' is now
>>>>> accessible through
>>>>>   shared state.
>>

Re: WIP/PoC for parallel backup

2020-04-17 Thread Kashif Zeeshan
On Tue, Apr 14, 2020 at 5:33 PM Asif Rehman  wrote:

>
>
> On Wed, Apr 8, 2020 at 6:53 PM Kashif Zeeshan <
> kashif.zees...@enterprisedb.com> wrote:
>
>>
>>
>> On Tue, Apr 7, 2020 at 9:44 PM Asif Rehman 
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks, Kashif and Rajkumar. I have fixed the reported issues.
>>>
>>> I have added the shared state as previously described. The new grammar
>>> changes
>>> are as follows:
>>>
>>> START_BACKUP [LABEL ''] [FAST] [MAX_RATE %d]
>>> - This will generate a unique backupid using pg_strong_random(16)
>>> and hex-encoded
>>>   it. which is then returned as the result set.
>>> - It will also create a shared state and add it to the hashtable.
>>> The hash table size is set
>>>   to BACKUP_HASH_SIZE=10, but since hashtable can expand
>>> dynamically, I think it's
>>>   sufficient initial size. max_wal_senders is not used, because it
>>> can be set to quite a
>>>   large values.
>>>
>>> JOIN_BACKUP 'backup_id'
>>> - finds 'backup_id' in hashtable and attaches it to server process.
>>>
>>>
>>> SEND_FILE '(' 'FILE' ')' [NOVERIFY_CHECKSUMS]
>>> - renamed SEND_FILES to SEND_FILE
>>> - removed START_WAL_LOCATION from this because 'startptr' is now
>>> accessible through
>>>   shared state.
>>>
>>> There is no change in other commands:
>>> STOP_BACKUP [NOWAIT]
>>> LIST_TABLESPACES [PROGRESS]
>>> LIST_FILES [TABLESPACE]
>>> LIST_WAL_FILES [START_WAL_LOCATION 'X/X'] [END_WAL_LOCATION 'X/X']
>>>
>>> The current patches (v11) have been rebased to the latest master. The
>>> backup manifest is enabled
>>> by default, so I have disabled it for parallel backup mode and added a
>>> warning so that the
>>> user is aware of it and does not expect it in the backup.
>>>
>> Hi Asif
>>
>> I have verified the bug fixes; one bug is fixed and now works as
>> expected.
>>
>> While verifying the other bug fixes I ran into the following issues;
>> please have a look.
>>
>>
>> 1) The following bug fixes are generating a segmentation fault.
>>
>> Please note that for reference I have added only a description, as the
>> steps for each bug whose fix I tried to verify were given in previous
>> emails. A backtrace is also added for each case; both point to the
>> same bug.
>>
>> a) The backup failed with errors "error: could not connect to server:
>> could not look up local user ID 1000: Too many open files" when the
>> max_wal_senders was set to 2000.
>>
>>
>> [edb@localhost bin]$ ./pg_basebackup -v -j 1990 -D
>>  /home/edb/Desktop/backup/
>> pg_basebackup: warning: backup manifest is disabled in parallel backup
>> mode
>> pg_basebackup: initiating base backup, waiting for checkpoint to complete
>> pg_basebackup: checkpoint completed
>> pg_basebackup: write-ahead log start point: 0/228 on timeline 1
>> pg_basebackup: starting background WAL receiver
>> pg_basebackup: created temporary replication slot "pg_basebackup_9925"
>> pg_basebackup: backup worker (0) created
>> pg_basebackup: backup worker (1) created
>> pg_basebackup: backup worker (2) created
>> pg_basebackup: backup worker (3) created
>> ….
>> ….
>> pg_basebackup: backup worker (1014) created
>> pg_basebackup: backup worker (1015) created
>> pg_basebackup: backup worker (1016) created
>> pg_basebackup: backup worker (1017) created
>> pg_basebackup: error: could not connect to server: could not look up
>> local user ID 1000: Too many open files
>> Segmentation fault
>> [edb@localhost bin]$
>>
>>
>> [edb@localhost bin]$
>> [edb@localhost bin]$ gdb pg_basebackup
>> /tmp/cores/core.pg_basebackup.13219.localhost.localdomain.1586349551
>> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-115.el7
>> Copyright (C) 2013 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later <
>> http://gnu.org/licenses/gpl.html>
>> This is free software: you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
>> and "show warranty" for details.
>> This GDB was configured as "x86_64-redhat-linux-gnu".
>> For bug reporting instructions, 

Re: [PoC] Federated Authn/z with OAUTHBEARER

2025-01-08 Thread Kashif Zeeshan
On Wed, Jan 8, 2025 at 3:21 AM Jacob Champion <
jacob.champ...@enterprisedb.com> wrote:

> On Fri, Dec 20, 2024 at 2:21 PM Daniel Gustafsson  wrote:
> >
> > > On 20 Dec 2024, at 02:00, Jacob Champion <
> jacob.champ...@enterprisedb.com> wrote:
> >
> > Thanks for the new version, I was doing a v39 review but I'll roll
> > that over into a v40 review now.
>
> (Sorry for the rug pull!)
>
> > As I was reading I was trying to identify parts that can be broken
> > out and committed ahead of time.  This is not only to trim down the
> > size, but mostly to shape the final commit into a coherent single
> > commit that brings a single piece of functionality utilizing existing
> > APIs.  Basically I think we should keep generic functionality out of
> > the final commit and keep that focused on OAuth and the required APIs
> > and infra.
>
> Sounds good.
>
> > The async auth support seemed like a candidate to go in before the
> > rest.  While there won't be any consumers of it, it's also not
> > limited to OAuth.  What do you think about slicing that off and
> > getting it in ahead of time?  I took a small stab at separating out
> > the generic bits (it includes the PG_MAX_AUTH_TOKEN_LENGTH move as
> > well, which is unrelated but could also be committed ahead of time)
> > along with some small tweaks on it.
>
> +1 to separating the PG_MAX_... macro move. I will take a closer look
> at the async patch in isolation; there's some work I'm doing to fix a
> bug Kashif (cc'd) found recently, and it has me a bit unsure about my
> chosen order of operations in the async part of fe-connect.c. That
> deserves its own email, but I need to investigate more.
>
Thanks Jacob
Most of the testing with psql is done, and I am working on the remaining
test cases.


>
> Thanks!
> --Jacob
>
>
>


Re: [PoC] Federated Authn/z with OAUTHBEARER

2025-01-19 Thread Kashif Zeeshan
On Tue, Jan 14, 2025 at 6:00 AM Jacob Champion <
jacob.champ...@enterprisedb.com> wrote:

> On Mon, Jan 13, 2025 at 3:21 PM Jacob Champion
>  wrote:
> > Next email will discuss the architectural bug that Kashif found.
>
> Okay, here goes. A standard OAuth connection attempt looks like this
> (oh, I hope Gmail doesn't mangle it):
>
> Issuer    User    libpq   Backend
>   |         |       |
>   |         x ->    x ->    o      [1] Startup Packet
>   |         |       |       |
>   |         |       x <-    x      [2] OAUTHBEARER Request
>   |         |       |       |
>   |         |       x ->    x      [3] Parameter Discovery
>   |         |       |       |
>   |         |       x <-    o      [4] Parameters Stored
>   |         |       |
>   |         |       |
>   |         |       |
>   |         |       x ->    o      [5] New Startup Packet
>   |         |       |       |
>   |         |       x <-    x      [6] OAUTHBEARER Request
>   |         |       |       |
>   x <-      x <>    x       |
>   x <-      x <>    x       |      [7] OAuth Handshake
>   x <-      x <>    x       |
>   |         |       |       |
>   o         |       x ->    x      [8] Send Token
>             |       |       |
>             | <-    x <-    x      [9] Connection Established
>             |       |       |
>             x <>    x <>    x
>             x <>    x <>    x      [10] Use the DB
>             ...
>             ...
>             ...
>
> When the server first asks for a token via OAUTHBEARER (step 2), the
> client doesn't necessarily know what the server's requirements are for
> a given user. It uses the rest of the doomed OAUTHBEARER exchange to
> store the issuer and scope information in the PGconn (step 3-4), then
> disconnects and sets need_new_connection in PQconnectPoll() so that a
> second connection is immediately opened (step 5). When the OAUTHBEARER
> mechanism takes control the second time, it has everything it needs to
> conduct the login flow with the issuer (step 7). It then sends the
> obtained token to establish a connection (steps 8 onward).
>
> The problem is that step 7 is consuming the authentication_timeout for
> the backend. I'm very good at completing these flows quickly, but if
> you can't complete the browser prompts in time, you will simply not be
> able to log into the server. Which is harsh to say the least. (Imagine
> the pain if the standard psql password prompt timed out.) DBAs can get
> around it by increasing the timeout, obviously, but that doesn't feel
> very good as a solution.
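For reference, the server-side knob in question is authentication_timeout in postgresql.conf; raising it is the workaround mentioned above, shown here with its default value only as a sketch:

```
# postgresql.conf
authentication_timeout = 1min   # default; the entire OAuth browser flow
                                # currently has to complete within this window
```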
>
> Last week I looked into a fix where libpq would simply try again with
> the stored token if the backend hangs up on it during the handshake,
> but I think that will end up making the UX worse. The token validation
> on the server side isn't going to be instantaneous, so if the client
> is able to complete the token exchange in 59 seconds and send it to
> the backend, there's an excellent chance that the connection is still
> going to be torn down in a way that's indistinguishable from a crash.
> We don't want the two sides to fight for time.
>
> So I think what I'm going to need to do is modify v41-0003 to allow
> the mechanism to politely hang up the connection while the flow is in
> progress. This further decouples the lifetimes of the mechanism and
> the async auth -- the async state now has to live outside of the SASL
> exchange -- but I think it's probably more architecturally sound. Yell
> at me if that sounds unmaintainable or if there's a more obvious fix
> I'm missing.
>
> Huge thanks to Kashif for pointing this out!
>
Thanks Jacob, the latest patch fixed the issues.

>
> --Jacob
>


Re: New committer: Jacob Champion

2025-04-13 Thread Kashif Zeeshan
Congrats Jacob.

On Mon, Apr 14, 2025 at 8:32 AM Ashutosh Bapat 
wrote:

> Hearty congratulations Jacob.
>
> On Mon, Apr 14, 2025 at 6:55 AM Richard Guo 
> wrote:
> >
> > On Sat, Apr 12, 2025 at 5:26 AM Jonathan S. Katz 
> wrote:
> > > The Core Team would like to extend our congratulations to Jacob
> > > Champion, who has accepted an invitation to become our newest
> PostgreSQL
> > > committer.
> > >
> > > Please join us in wishing Jacob much success and few reverts!
> >
> > Congratulations Jacob!  Well deserved!
> >
> > Thanks
> > Richard
> >
> >
>
>
> --
> Best Wishes,
> Ashutosh Bapat
>
>
>