Re: Re: speed up pg_upgrade with large number of tables

2024-07-08 Thread ()
> Thanks! Since you mentioned that you have multiple databases with 1M+ tables, you might also be interested in commit 2329cad. That should speed up the pg_dump step quite a bit.

Wow, I noticed this commit (2329cad) when it appeared in the commitfest. It has doubled the speed of pg_dump in this …

Re: speed up pg_upgrade with large number of tables

2024-07-05 Thread ()
> > So, I'm thinking, why not add a "--skip-check" option in pg_upgrade to skip it?
> > See "1-Skip_Compatibility_Check_v1.patch".
>
> How would a user know that nothing has changed in the cluster between running the check and running the upgrade with a skipped check? Considering how …

speed up pg_upgrade with large number of tables

2024-07-05 Thread ()
Hello postgres hackers: I have recently been working on speeding up pg_upgrade for a database with over a million tables and would like to share some (maybe) optimizable or interesting findings.

1. Skip Compatibility Check in "pg_upgrade"
Concisely, we've got …

The presence of a NULL "defaclacl" value in pg_default_acl prevents the dropping of a role.

2024-01-02 Thread ()
Hello postgres hackers: I recently came across a scenario involving the system catalog "pg_default_acl" where a tuple contains a NULL value for the "defaclacl" attribute. This can cause confusion when dropping a role whose default ACL has been changed. Here is a way to reproduce that: …
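Since the reproduction in the preview is cut off, here is a minimal sketch of the kind of situation the thread describes. The role names (acl_owner, acl_grantee) are hypothetical, and the exact sequence that leaves "defaclacl" NULL is only in the original thread; the sketch just shows how a pg_default_acl entry ties a role to a shared dependency, how to inspect it, and one way to clear it before DROP ROLE.

```
-- Sketch only: role names are hypothetical; the precise steps that produce a
-- NULL defaclacl are in the original thread and are not reproduced here.
CREATE ROLE acl_owner;
CREATE ROLE acl_grantee;

-- Changing default privileges for acl_owner creates a row in pg_default_acl.
ALTER DEFAULT PRIVILEGES FOR ROLE acl_owner
    GRANT SELECT ON TABLES TO acl_grantee;

-- Inspect the catalog entry, including its defaclacl value.
SELECT defaclrole::regrole, defaclnamespace, defaclobjtype, defaclacl
FROM pg_default_acl
WHERE defaclrole = 'acl_owner'::regrole;

-- While that entry exists, dropping the role fails with a dependency error.
DROP ROLE acl_owner;  -- ERROR: role "acl_owner" cannot be dropped ...

-- Removing the default-ACL entry (for example via DROP OWNED BY) allows the drop.
DROP OWNED BY acl_owner;
DROP ROLE acl_owner;
DROP ROLE acl_grantee;
```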

function "cursor_to_xmlschema" causes a crash

2023-09-18 Thread ()
Hello postgres hackers: I recently noticed that the function "cursor_to_xmlschema" can lead to a crash if the cursor parameter points to the query itself (here, the empty cursor name refers to the query's own unnamed portal). Here is an example:

postgres=# SELECT cursor_to_xmlschema('' :: refcursor, TRUE, FALSE, 'xxx') INTO temp;
server closed the connection unexpectedly …
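For contrast, a minimal sketch of the documented usage pattern, where the refcursor names a cursor declared beforehand rather than the running query's own portal (the cursor name and query here are illustrative):

```
-- Declare a real cursor inside a transaction, then map its result shape to an
-- XML schema; this path does not involve the query's own portal.
BEGIN;
DECLARE xml_cur CURSOR FOR SELECT relname, relkind FROM pg_class LIMIT 5;
SELECT cursor_to_xmlschema('xml_cur'::refcursor, true, false, '');
COMMIT;
```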