On Thu, Feb 05, 2026 at 11:19:46AM -0500, Andres Freund wrote:
> It certainly seems better than what we do now.  Still feels pretty grotty and
> error prone to me that we fill the catalog table and then throw the contents
> out.

Before I go any further with this approach, I thought of something else we
could do that I believe is worth considering...

As of commit 3bcfcd815e, the only reason we dump any of
pg_largeobject_metadata at all is to avoid an ERROR during COMMENT ON or
SECURITY LABEL ON, which otherwise fails because the call to
LargeObjectExists() in get_object_address() returns false.  If we bypass
that check in binary-upgrade mode, we can skip dumping
pg_largeobject_metadata entirely.
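
For illustration (the OID and comment text here are made up), a
binary-upgrade dump would then contain only commands like these for a
large object that has a comment or security label:

    -- hypothetical example; 16385 is an arbitrary large object OID
    COMMENT ON LARGE OBJECT 16385 IS 'example comment';
    SECURITY LABEL ON LARGE OBJECT 16385 IS 'example label';

At restore time the pg_largeobject_metadata row for that OID hasn't been
transferred yet, so get_object_address() would reject the command unless
we skip the LargeObjectExists() check when IsBinaryUpgrade is set.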

The attached patch passes our existing tests, and it seems to create the
expected binary-upgrade-mode dump files, too.  I haven't updated any of the
comments yet.

-- 
nathan
From 1fa1890f6bf98b718118f070226d8b102073e165 Mon Sep 17 00:00:00 2001
From: Nathan Bossart <[email protected]>
Date: Thu, 5 Feb 2026 11:26:39 -0600
Subject: [PATCH v2 1/1] fix pg_largeobject_metadata file transfer

---
 src/backend/catalog/objectaddress.c |  2 +-
 src/bin/pg_dump/pg_dump.c           | 53 +++++++++++++----------------
 2 files changed, 24 insertions(+), 31 deletions(-)

diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index 02af64b82c6..1762146c09a 100644
--- a/src/backend/catalog/objectaddress.c
+++ b/src/backend/catalog/objectaddress.c
@@ -1046,7 +1046,7 @@ get_object_address(ObjectType objtype, Node *object,
                                address.classId = LargeObjectRelationId;
                                address.objectId = oidparse(object);
                                address.objectSubId = 0;
-                               if (!LargeObjectExists(address.objectId))
+                               if (!LargeObjectExists(address.objectId) && !IsBinaryUpgrade)
                                {
                                        if (!missing_ok)
                                                ereport(ERROR,
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2bebefd0ba2..6bcb2f61fe8 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1129,19 +1129,11 @@ main(int argc, char **argv)
         */
        if (dopt.binary_upgrade && fout->remoteVersion >= 120000)
        {
-               TableInfo  *lo_metadata = findTableByOid(LargeObjectMetadataRelationId);
-               TableInfo  *shdepend = findTableByOid(SharedDependRelationId);
+               TableInfo  *shdepend;
 
-               makeTableDataInfo(&dopt, lo_metadata);
+               shdepend = findTableByOid(SharedDependRelationId);
                makeTableDataInfo(&dopt, shdepend);
 
-               /*
-                * Save pg_largeobject_metadata's dump ID for use as a dependency for
-                * pg_shdepend and any large object comments/seclabels.
-                */
-               lo_metadata_dumpId = lo_metadata->dataObj->dobj.dumpId;
-               addObjectDependency(&shdepend->dataObj->dobj, lo_metadata_dumpId);
-
                /*
                 * Only dump large object shdepend rows for this database.
                 */
@@ -1149,22 +1141,20 @@ main(int argc, char **argv)
                        "AND dbid = (SELECT oid FROM pg_database "
                        "            WHERE datname = current_database())";
 
-               /*
-                * If upgrading from v16 or newer, only dump large objects with
-                * comments/seclabels.  For these upgrades, pg_upgrade can copy/link
-                * pg_largeobject_metadata's files (which is usually faster) but we
-                * still need to dump LOs with comments/seclabels here so that the
-                * subsequent COMMENT and SECURITY LABEL commands work.  pg_upgrade
-                * can't copy/link the files from older versions because aclitem
-                * (needed by pg_largeobject_metadata.lomacl) changed its storage
-                * format in v16.
-                */
-               if (fout->remoteVersion >= 160000)
-                       lo_metadata->dataObj->filtercond = "WHERE oid IN "
-                               "(SELECT objoid FROM pg_description "
-                               "WHERE classoid = " CppAsString2(LargeObjectRelationId) " "
-                               "UNION SELECT objoid FROM pg_seclabel "
-                               "WHERE classoid = " CppAsString2(LargeObjectRelationId) ")";
+               if (fout->remoteVersion < 160000)
+               {
+                       TableInfo  *lo_metadata;
+
+                       lo_metadata = findTableByOid(LargeObjectMetadataRelationId);
+                       makeTableDataInfo(&dopt, lo_metadata);
+
+                       /*
+                        * Save pg_largeobject_metadata's dump ID for use as a dependency
+                        * for pg_shdepend and any large object comments/seclabels.
+                        */
+                       lo_metadata_dumpId = lo_metadata->dataObj->dobj.dumpId;
+                       addObjectDependency(&shdepend->dataObj->dobj, lo_metadata_dumpId);
+               }
        }
 
        /*
@@ -4079,7 +4069,7 @@ getLOs(Archive *fout)
                                 * We should've saved pg_largeobject_metadata's dump ID before
                                 * this point.
                                 */
-                               Assert(lo_metadata_dumpId);
+                               Assert(lo_metadata_dumpId || fout->remoteVersion >= 160000);
 
                                loinfo->dobj.dump &= ~(DUMP_COMPONENT_DATA | DUMP_COMPONENT_ACL | DUMP_COMPONENT_DEFINITION);
 
@@ -4088,9 +4078,12 @@ getLOs(Archive *fout)
                                 * pg_largeobject_metadata so that any large object
                                 * comments/seclables are dumped after it.
                                 */
-                               loinfo->dobj.dependencies = (DumpId *) pg_malloc(sizeof(DumpId));
-                               loinfo->dobj.dependencies[0] = lo_metadata_dumpId;
-                               loinfo->dobj.nDeps = loinfo->dobj.allocDeps = 1;
+                               if (fout->remoteVersion < 160000)
+                               {
+                                       loinfo->dobj.dependencies = (DumpId *) pg_malloc(sizeof(DumpId));
+                                       loinfo->dobj.dependencies[0] = lo_metadata_dumpId;
+                                       loinfo->dobj.nDeps = loinfo->dobj.allocDeps = 1;
+                               }
                        }
                        else
                                loinfo->dobj.dump &= ~DUMP_COMPONENT_DATA;
-- 
2.50.1 (Apple Git-155)
