KaiGai Kohei <kai...@ak.jp.nec.com> wrote:

> > I don't think this is necessarily a good idea.  We might decide to treat
> > both things separately in the future and it having them represented
> > separately in the dump would prove useful.
> 
> I agree. From design perspective, the single section approach is more
> simple than dual section, but its change set is larger than the dual.

OK.


When I tested a custom-format dump with pg_restore, --clean & --single-transaction
fails with the new dump format because it always calls lo_unlink(),
even if the large object doesn't exist. The unconditional call comes from dumpBlobItem:

! dumpBlobItem(Archive *AH, BlobInfo *binfo)
!       appendPQExpBuffer(dquery, "SELECT lo_unlink(%s);\n", binfo->dobj.name);

The query in DropBlobIfExists() could avoid errors -- should we use it here?
| SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = %s;
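
For reference, a quick psql illustration of the difference (16384 is just
an arbitrary OID assumed not to exist; error wording is from memory):

    =# SELECT lo_unlink(16384);
    ERROR:  large object 16384 does not exist

    =# SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = 16384;
     lo_unlink
    -----------
    (0 rows)

The guarded form simply returns zero rows when the object is missing, so the
restore script keeps going and the surrounding transaction is not aborted.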


BTW, the --clean option is ambiguous when combined with --data-only. Restoring
large objects fails for the above reason if the previous objects don't exist,
but table data are restored *without* truncating the existing data. Will
normal users expect TRUNCATE-before-load for the --clean & --data-only case?

    Present behaviors are:
        Table data    - Appended (--clean is ignored).
        Large objects - Fail with an error if the object doesn't exist.
    IMO, ideal behaviors would be (sketched below):
        Table data    - Truncate existing data, then load the new data.
        Large objects - Work like MERGE (or REPLACE/UPSERT).
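
To make the intent concrete, here is a rough sketch of the SQL a restore
could emit under those ideal behaviors (the table name "foo" and OID 16385
are hypothetical placeholders; the COPY data and blob contents are elided):

    -- table data: truncate existing rows before loading
    TRUNCATE TABLE foo;
    COPY foo FROM stdin;
    ...
    \.

    -- large objects: drop the old object if present, then recreate it
    SELECT lo_unlink(oid) FROM pg_largeobject_metadata WHERE oid = 16385;
    SELECT lo_create(16385);
    -- the blob contents would then be written via lo_open()/lowrite() as usual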

Comments?

Regards,
---
Takahiro Itagaki
NTT Open Source Software Center


