I wrote:
> I think that the most intellectually rigorous solution is to
> generate dummy TABLE DATA objects for partitioned tables, which
> don't actually contain data but merely carry dependencies on
> each of the child tables' TABLE DATA objects.
Here's a draft patch for this. It seems to fix the problem in
light testing. Some notes:
* Quite a lot of the patch is concerned with making various places
treat the new PARTITIONED DATA TOC entry type the same as TABLE DATA.
I considered removing that distinction and representing a partitioned
table's data object as TABLE DATA with no dataDumper, but it seems to
me this way is clearer. Maybe others will think differently though;
it'd make for a smaller patch.
* It's annoying that we have to touch _tocEntryRequired's "Special
Case" logic for deciding whether an entry is schema or data, because
that means that old copies of pg_restore will think these entries are
schema and thus ignore them in a data-only restore. But I think it
doesn't matter too much, because in a data-only restore we'd not be
creating indexes or foreign keys, so the scheduling bug isn't really
problematic.
* I'm not quite certain whether identify_locking_dependencies() needs
to treat PARTITIONED DATA dependencies as lockable. I assumed here
that it does, but maybe we don't take out exclusive locks on
partitioned tables during restore?
* I noticed that a --data-only dump of the regression database now
complains:
$ pg_dump --data-only regression >r.dump
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: detail: parted_self_fk
pg_dump: hint: You might not be able to restore the dump without using
--disable-triggers or temporarily dropping the constraints.
pg_dump: hint: Consider using a full dump instead of a --data-only dump to
avoid this problem.
The existing code does not produce this warning, but I think doing so
is correct. The reason we missed the issue before is that
getTableDataFKConstraints ignores tables without a dataObj, so before
this patch it ignored partitioned tables altogether.
Comments?
regards, tom lane
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index f961162f365..a05ff716fe6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -1988,12 +1988,14 @@ buildTocEntryArrays(ArchiveHandle *AH)
AH->tocsByDumpId[te->dumpId] = te;
/*
- * tableDataId provides the TABLE DATA item's dump ID for each TABLE
- * TOC entry that has a DATA item. We compute this by reversing the
- * TABLE DATA item's dependency, knowing that a TABLE DATA item has
- * just one dependency and it is the TABLE item.
+ * tableDataId provides the DATA item's dump ID for each TABLE TOC
+ * entry that has a TABLE DATA or PARTITIONED DATA item. We compute
+ * this by reversing the DATA item's dependency, knowing that its
+ * first dependency is the TABLE item.
*/
- if (strcmp(te->desc, "TABLE DATA") == 0 && te->nDeps > 0)
+ if (te->nDeps > 0 &&
+ (strcmp(te->desc, "TABLE DATA") == 0 ||
+ strcmp(te->desc, "PARTITIONED DATA") == 0))
{
DumpId tableId = te->dependencies[0];
@@ -2003,7 +2005,7 @@ buildTocEntryArrays(ArchiveHandle *AH)
* item's dump ID, so there should be a place for it in the array.
*/
if (tableId <= 0 || tableId > maxDumpId)
- pg_fatal("bad table dumpId for TABLE DATA item");
+ pg_fatal("bad table dumpId for %s item", te->desc);
AH->tableDataId[tableId] = te->dumpId;
}
@@ -3140,6 +3142,7 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
{
if (strcmp(te->desc, "TABLE") == 0 ||
strcmp(te->desc, "TABLE DATA") == 0 ||
+ strcmp(te->desc, "PARTITIONED DATA") == 0 ||
strcmp(te->desc, "VIEW") == 0 ||
strcmp(te->desc, "FOREIGN TABLE") == 0 ||
strcmp(te->desc, "MATERIALIZED VIEW") == 0 ||
@@ -3194,13 +3197,14 @@ _tocEntryRequired(TocEntry *te, teSection curSection, ArchiveHandle *AH)
if (!te->hadDumper)
{
/*
- * Special Case: If 'SEQUENCE SET' or anything to do with LOs, then it
- * is considered a data entry. We don't need to check for BLOBS or
- * old-style BLOB COMMENTS entries, because they will have hadDumper =
- * true ... but we do need to check new-style BLOB ACLs, comments,
- * etc.
+ * Special Case: If 'PARTITIONED DATA', 'SEQUENCE SET', or anything to
+ * do with LOs, then it is considered a data entry. We don't need to
+ * check for BLOBS or old-style BLOB COMMENTS entries, because they
+ * will have hadDumper = true ... but we do need to check new-style
+ * BLOB ACLs, comments, etc.
*/
- if (strcmp(te->desc, "SEQUENCE SET") == 0 ||
+ if (strcmp(te->desc, "PARTITIONED DATA") == 0 ||
+ strcmp(te->desc, "SEQUENCE SET") == 0 ||
strcmp(te->desc, "BLOB") == 0 ||
strcmp(te->desc, "BLOB METADATA") == 0 ||
(strcmp(te->desc, "ACL") == 0 &&
@@ -4996,14 +5000,14 @@ identify_locking_dependencies(ArchiveHandle *AH, TocEntry *te)
return;
/*
- * We assume the entry requires exclusive lock on each TABLE or TABLE DATA
- * item listed among its dependencies. Originally all of these would have
- * been TABLE items, but repoint_table_dependencies would have repointed
- * them to the TABLE DATA items if those are present (which they might not
- * be, eg in a schema-only dump). Note that all of the entries we are
- * processing here are POST_DATA; otherwise there might be a significant
- * difference between a dependency on a table and a dependency on its
- * data, so that closer analysis would be needed here.
+ * We assume the entry requires exclusive lock on each TABLE, TABLE DATA,
+ * or PARTITIONED DATA item listed among its dependencies. Originally all
+ * of these would have been TABLE items, but repoint_table_dependencies
+ * would have repointed them to the DATA items if those are present (which
+ * they might not be, eg in a schema-only dump). Note that all of the
+ * entries we are processing here are POST_DATA; otherwise there might be
+ * a significant difference between a dependency on a table and a
+ * dependency on its data, so that closer analysis would be needed here.
*/
lockids = (DumpId *) pg_malloc(te->nDeps * sizeof(DumpId));
nlockids = 0;
@@ -5012,8 +5016,9 @@ identify_locking_dependencies(ArchiveHandle *AH, TocEntry *te)
DumpId depid = te->dependencies[i];
if (depid <= AH->maxDumpId && AH->tocsByDumpId[depid] != NULL &&
- ((strcmp(AH->tocsByDumpId[depid]->desc, "TABLE DATA") == 0) ||
- strcmp(AH->tocsByDumpId[depid]->desc, "TABLE") == 0))
+ (strcmp(AH->tocsByDumpId[depid]->desc, "TABLE") == 0 ||
+ strcmp(AH->tocsByDumpId[depid]->desc, "TABLE DATA") == 0 ||
+ strcmp(AH->tocsByDumpId[depid]->desc, "PARTITIONED DATA") == 0))
lockids[nlockids++] = depid;
}
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index 365073b3eae..5e2eaf2d4c6 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -310,7 +310,8 @@ struct _archiveHandle
/* arrays created after the TOC list is complete: */
struct _tocEntry **tocsByDumpId; /* TOCs indexed by dumpId */
- DumpId *tableDataId; /* TABLE DATA ids, indexed by table dumpId */
+ DumpId *tableDataId; /* TABLE DATA and PARTITIONED DATA dumpIds,
+ * indexed by table dumpId */
struct _tocEntry *currToc; /* Used when dumping data */
pg_compress_specification compression_spec; /* Requested specification for
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index f7c3af56304..42dda2c4220 100644
--- a/src/bin/pg_dump/pg_backup_custom.c
+++ b/src/bin/pg_dump/pg_backup_custom.c
@@ -25,6 +25,8 @@
*/
#include "postgres_fe.h"
+#include <limits.h>
+
#include "common/file_utils.h"
#include "compress_io.h"
#include "pg_backup_utils.h"
@@ -819,10 +821,10 @@ _ReopenArchive(ArchiveHandle *AH)
/*
* Prepare for parallel restore.
*
- * The main thing that needs to happen here is to fill in TABLE DATA and BLOBS
- * TOC entries' dataLength fields with appropriate values to guide the
- * ordering of restore jobs. The source of said data is format-dependent,
- * as is the exact meaning of the values.
+ * The main thing that needs to happen here is to fill in TABLE DATA,
+ * PARTITIONED DATA, and BLOBS TOC entries' dataLength fields with appropriate
+ * values to guide the ordering of restore jobs. The source of said data is
+ * format-dependent, as is the exact meaning of the values.
*
* A format module might also choose to do other setup here.
*/
@@ -830,6 +832,7 @@ static void
_PrepParallelRestore(ArchiveHandle *AH)
{
lclContext *ctx = (lclContext *) AH->formatData;
+ bool have_partitioned_data = false;
TocEntry *prev_te = NULL;
lclTocEntry *prev_tctx = NULL;
TocEntry *te;
@@ -843,6 +846,10 @@ _PrepParallelRestore(ArchiveHandle *AH)
{
lclTocEntry *tctx = (lclTocEntry *) te->formatData;
+ /* Track whether there are any PARTITIONED DATA items */
+ if (!have_partitioned_data && strcmp(te->desc, "PARTITIONED DATA") == 0)
+ have_partitioned_data = true;
+
/*
* Ignore entries without a known data offset; if we were unable to
* seek to rewrite the TOC when creating the archive, this'll be all
@@ -873,6 +880,38 @@ _PrepParallelRestore(ArchiveHandle *AH)
if (endpos > prev_tctx->dataPos)
prev_te->dataLength = endpos - prev_tctx->dataPos;
}
+
+ /*
+ * For PARTITIONED DATA items, add up the sizes of their child objects.
+ * (We couldn't do this earlier, since when we encounter a PARTITIONED
+ * DATA item in the first loop we typically don't know the dataLength of
+ * its last child yet.)
+ */
+ if (have_partitioned_data)
+ {
+ for (te = AH->toc->next; te != AH->toc; te = te->next)
+ {
+ if (strcmp(te->desc, "PARTITIONED DATA") != 0)
+ continue;
+ for (int i = 0; i < te->nDeps; i++)
+ {
+ DumpId depid = te->dependencies[i];
+
+ if (depid <= AH->maxDumpId && AH->tocsByDumpId[depid] != NULL)
+ {
+ pgoff_t childLength = AH->tocsByDumpId[depid]->dataLength;
+
+ te->dataLength += childLength;
+ /* handle overflow -- unlikely except with 32-bit pgoff_t */
+ if (unlikely(te->dataLength < 0))
+ {
+ te->dataLength = INT_MAX;
+ break;
+ }
+ }
+ }
+ }
+ }
}
/*
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index 21b00792a8a..dcb7dfc2ee7 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -37,6 +37,7 @@
#include "postgres_fe.h"
#include <dirent.h>
+#include <limits.h>
#include <sys/stat.h>
#include "common/file_utils.h"
@@ -722,16 +723,17 @@ setFilePath(ArchiveHandle *AH, char *buf, const char *relativeFilename)
/*
* Prepare for parallel restore.
*
- * The main thing that needs to happen here is to fill in TABLE DATA and BLOBS
- * TOC entries' dataLength fields with appropriate values to guide the
- * ordering of restore jobs. The source of said data is format-dependent,
- * as is the exact meaning of the values.
+ * The main thing that needs to happen here is to fill in TABLE DATA,
+ * PARTITIONED DATA, and BLOBS TOC entries' dataLength fields with appropriate
+ * values to guide the ordering of restore jobs. The source of said data is
+ * format-dependent, as is the exact meaning of the values.
*
* A format module might also choose to do other setup here.
*/
static void
_PrepParallelRestore(ArchiveHandle *AH)
{
+ bool have_partitioned_data = false;
TocEntry *te;
for (te = AH->toc->next; te != AH->toc; te = te->next)
@@ -740,6 +742,10 @@ _PrepParallelRestore(ArchiveHandle *AH)
char fname[MAXPGPATH];
struct stat st;
+ /* Track whether there are any PARTITIONED DATA items */
+ if (!have_partitioned_data && strcmp(te->desc, "PARTITIONED DATA") == 0)
+ have_partitioned_data = true;
+
/*
* A dumpable object has set tctx->filename, any other object has not.
* (see _ArchiveEntry).
@@ -784,6 +790,38 @@ _PrepParallelRestore(ArchiveHandle *AH)
if (strcmp(te->desc, "BLOBS") == 0)
te->dataLength *= 1024;
}
+
+ /*
+ * For PARTITIONED DATA items, add up the sizes of their child objects.
+ * (Unlike in pg_backup_custom.c, we could theoretically do this within the
+ * previous loop, but it seems best to keep the logic looking the same in
+ * both functions.)
+ */
+ if (have_partitioned_data)
+ {
+ for (te = AH->toc->next; te != AH->toc; te = te->next)
+ {
+ if (strcmp(te->desc, "PARTITIONED DATA") != 0)
+ continue;
+ for (int i = 0; i < te->nDeps; i++)
+ {
+ DumpId depid = te->dependencies[i];
+
+ if (depid <= AH->maxDumpId && AH->tocsByDumpId[depid] != NULL)
+ {
+ pgoff_t childLength = AH->tocsByDumpId[depid]->dataLength;
+
+ te->dataLength += childLength;
+ /* handle overflow -- unlikely except with 32-bit pgoff_t */
+ if (unlikely(te->dataLength < 0))
+ {
+ te->dataLength = INT_MAX;
+ break;
+ }
+ }
+ }
+ }
+ }
}
/*
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c6e6d3b2b86..3c174924770 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -258,6 +258,7 @@ static void prohibit_crossdb_refs(PGconn *conn, const char *dbname,
static NamespaceInfo *findNamespace(Oid nsoid);
static void dumpTableData(Archive *fout, const TableDataInfo *tdinfo);
+static void dumpPartitionedData(Archive *fout, const TableDataInfo *tdinfo);
static void refreshMatViewData(Archive *fout, const TableDataInfo *tdinfo);
static const char *getRoleName(const char *roleoid_str);
static void collectRoleNames(Archive *fout);
@@ -347,6 +348,7 @@ static void getDomainConstraints(Archive *fout, TypeInfo *tyinfo);
static void getTableData(DumpOptions *dopt, TableInfo *tblinfo, int numTables, char relkind);
static void makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo);
static void buildMatViewRefreshDependencies(Archive *fout);
+static void buildPartitionedDataDependencies(Archive *fout);
static void getTableDataFKConstraints(void);
static void determineNotNullFlags(Archive *fout, PGresult *res, int r,
TableInfo *tbinfo, int j,
@@ -1076,6 +1078,7 @@ main(int argc, char **argv)
{
getTableData(&dopt, tblinfo, numTables, 0);
buildMatViewRefreshDependencies(fout);
+ buildPartitionedDataDependencies(fout);
if (!dopt.dumpSchema)
getTableDataFKConstraints();
}
@@ -2883,6 +2886,33 @@ dumpTableData(Archive *fout, const TableDataInfo *tdinfo)
destroyPQExpBuffer(clistBuf);
}
+/*
+ * dumpPartitionedData -
+ * dump the contents of a partitioned table
+ *
+ * Actually, this just makes an ArchiveEntry for the table contents.
+ * Furthermore, the ArchiveEntry doesn't really carry any data itself.
+ * What it carries is dependencies on the table data objects for the
+ * partitioned table's immediate children. In that way, a dependency
+ * on the PARTITIONED DATA TOC entry represents a dependency on all
+ * data within the partition hierarchy.
+ */
+static void
+dumpPartitionedData(Archive *fout, const TableDataInfo *tdinfo)
+{
+ TableInfo *tbinfo = tdinfo->tdtable;
+
+ if (tdinfo->dobj.dump & DUMP_COMPONENT_DATA)
+ ArchiveEntry(fout,
+ tdinfo->dobj.catId, /* catalog ID */
+ tdinfo->dobj.dumpId, /* dump ID */
+ ARCHIVE_OPTS(.tag = tbinfo->dobj.name,
+ .namespace = tbinfo->dobj.namespace->dobj.name,
+ .owner = tbinfo->rolname,
+ .description = "PARTITIONED DATA",
+ .section = SECTION_DATA));
+}
+
/*
* refreshMatViewData -
* load or refresh the contents of a single materialized view
@@ -2965,9 +2995,6 @@ makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo)
!simple_oid_list_member(&foreign_servers_include_oids,
tbinfo->foreign_server)))
return;
- /* Skip partitioned tables (data in partitions) */
- if (tbinfo->relkind == RELKIND_PARTITIONED_TABLE)
- return;
/* Don't dump data in unlogged tables, if so requested */
if (tbinfo->relpersistence == RELPERSISTENCE_UNLOGGED &&
@@ -2986,6 +3013,8 @@ makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo)
tdinfo->dobj.objType = DO_REFRESH_MATVIEW;
else if (tbinfo->relkind == RELKIND_SEQUENCE)
tdinfo->dobj.objType = DO_SEQUENCE_SET;
+ else if (tbinfo->relkind == RELKIND_PARTITIONED_TABLE)
+ tdinfo->dobj.objType = DO_PARTITIONED_DATA;
else
tdinfo->dobj.objType = DO_TABLE_DATA;
@@ -3000,6 +3029,12 @@ makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo)
tdinfo->dobj.namespace = tbinfo->dobj.namespace;
tdinfo->tdtable = tbinfo;
tdinfo->filtercond = NULL; /* might get set later */
+
+ /*
+ * The first dependency of any of these objTypes must be their table. We
+ * may add more dependencies later, for example between PARTITIONED DATA
+ * objects and their children, or to enforce foreign key dump order.
+ */
addObjectDependency(&tdinfo->dobj, tbinfo->dobj.dumpId);
/* A TableDataInfo contains data, of course */
@@ -3134,6 +3169,52 @@ buildMatViewRefreshDependencies(Archive *fout)
destroyPQExpBuffer(query);
}
+/*
+ * buildPartitionedDataDependencies -
+ * add dump-order dependencies for partitioned tables' data
+ *
+ * We make all PARTITIONED_DATA objects depend on the TABLE_DATA or
+ * PARTITIONED_DATA objects of the immediate children of the partitioned
+ * table. This might seem like the wrong direction for the dependencies
+ * to run, but it's what we want for controlling restore order. The
+ * PARTITIONED_DATA object will not be considered restorable until after
+ * all the child data objects are restored, and thus a dependency on it
+ * from another object such as an FK constraint will block that object from
+ * being restored until all the data in the partition hierarchy is loaded.
+ */
+static void
+buildPartitionedDataDependencies(Archive *fout)
+{
+ DumpableObject **dobjs;
+ int numObjs;
+ int i;
+
+ /* Search through all the dumpable objects for TableAttachInfos */
+ getDumpableObjects(&dobjs, &numObjs);
+ for (i = 0; i < numObjs; i++)
+ {
+ if (dobjs[i]->objType == DO_TABLE_ATTACH)
+ {
+ TableAttachInfo *attachinfo = (TableAttachInfo *) dobjs[i];
+ TableInfo *parentTbl = attachinfo->parentTbl;
+ TableInfo *partitionTbl = attachinfo->partitionTbl;
+
+ /* Not interesting unless both tables are to be dumped */
+ if (parentTbl->dataObj == NULL ||
+ partitionTbl->dataObj == NULL)
+ continue;
+
+ /*
+ * Okay, make parent table's PARTITIONED_DATA object depend on the
+ * child table's TABLE_DATA or PARTITIONED_DATA object.
+ */
+ addObjectDependency(&parentTbl->dataObj->dobj,
+ partitionTbl->dataObj->dobj.dumpId);
+ }
+ }
+ free(dobjs);
+}
+
/*
* getTableDataFKConstraints -
* add dump-order dependencies reflecting foreign key constraints
@@ -3173,7 +3254,8 @@ getTableDataFKConstraints(void)
/*
* Okay, make referencing table's TABLE_DATA object depend on the
- * referenced table's TABLE_DATA object.
+ * referenced table's TABLE_DATA object. (Either one could be a
+ * PARTITIONED_DATA object, too.)
*/
addObjectDependency(&cinfo->contable->dataObj->dobj,
ftable->dataObj->dobj.dumpId);
@@ -11448,6 +11530,9 @@ dumpDumpableObject(Archive *fout, DumpableObject *dobj)
case DO_TABLE_DATA:
dumpTableData(fout, (const TableDataInfo *) dobj);
break;
+ case DO_PARTITIONED_DATA:
+ dumpPartitionedData(fout, (const TableDataInfo *) dobj);
+ break;
case DO_DUMMY_TYPE:
/* table rowtypes and array types are never dumped separately */
break;
@@ -19723,6 +19808,7 @@ addBoundaryDependencies(DumpableObject **dobjs, int numObjs,
addObjectDependency(preDataBound, dobj->dumpId);
break;
case DO_TABLE_DATA:
+ case DO_PARTITIONED_DATA:
case DO_SEQUENCE_SET:
case DO_LARGE_OBJECT:
case DO_LARGE_OBJECT_DATA:
@@ -19792,8 +19878,8 @@ addBoundaryDependencies(DumpableObject **dobjs, int numObjs,
*
* Just to make things more complicated, there are also "special" dependencies
* such as the dependency of a TABLE DATA item on its TABLE, which we must
- * not rearrange because pg_restore knows that TABLE DATA only depends on
- * its table. In these cases we must leave the dependencies strictly as-is
+ * not rearrange because pg_restore knows that TABLE DATA's first dependency
+ * is its table. In these cases we must leave the dependencies strictly as-is
* even if they refer to not-to-be-dumped objects.
*
* To handle this, the convention is that "special" dependencies are created
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index b426b5e4736..49c0e489ccf 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -63,6 +63,7 @@ typedef enum
DO_PROCLANG,
DO_CAST,
DO_TABLE_DATA,
+ DO_PARTITIONED_DATA,
DO_SEQUENCE_SET,
DO_DUMMY_TYPE,
DO_TSPARSER,
@@ -399,6 +400,10 @@ typedef struct _attrDefInfo
bool separate; /* true if must dump as separate item */
} AttrDefInfo;
+/*
+ * TableDataInfo is used for DO_TABLE_DATA, DO_PARTITIONED_DATA,
+ * DO_REFRESH_MATVIEW, and DO_SEQUENCE_SET objTypes.
+ */
typedef struct _tableDataInfo
{
DumpableObject dobj;
diff --git a/src/bin/pg_dump/pg_dump_sort.c b/src/bin/pg_dump/pg_dump_sort.c
index 0b0977788f1..927cf7f7daa 100644
--- a/src/bin/pg_dump/pg_dump_sort.c
+++ b/src/bin/pg_dump/pg_dump_sort.c
@@ -129,6 +129,7 @@ static const int dbObjectTypePriority[] =
[DO_PROCLANG] = PRIO_PROCLANG,
[DO_CAST] = PRIO_CAST,
[DO_TABLE_DATA] = PRIO_TABLE_DATA,
+ [DO_PARTITIONED_DATA] = PRIO_TABLE_DATA,
[DO_SEQUENCE_SET] = PRIO_SEQUENCE_SET,
[DO_DUMMY_TYPE] = PRIO_DUMMY_TYPE,
[DO_TSPARSER] = PRIO_TSPARSER,
@@ -1233,13 +1234,15 @@ repairDependencyLoop(DumpableObject **loop,
}
/*
- * If all the objects are TABLE_DATA items, what we must have is a
- * circular set of foreign key constraints (or a single self-referential
- * table). Print an appropriate complaint and break the loop arbitrarily.
+ * If all the objects are TABLE_DATA or PARTITIONED_DATA items, what we
+ * must have is a circular set of foreign key constraints (or a single
+ * self-referential table). Print an appropriate complaint and break the
+ * loop arbitrarily.
*/
for (i = 0; i < nLoop; i++)
{
- if (loop[i]->objType != DO_TABLE_DATA)
+ if (loop[i]->objType != DO_TABLE_DATA &&
+ loop[i]->objType != DO_PARTITIONED_DATA)
break;
}
if (i >= nLoop)
@@ -1433,6 +1436,11 @@ describeDumpableObject(DumpableObject *obj, char *buf, int bufsize)
"TABLE DATA %s (ID %d OID %u)",
obj->name, obj->dumpId, obj->catId.oid);
return;
+ case DO_PARTITIONED_DATA:
+ snprintf(buf, bufsize,
+ "PARTITIONED DATA %s (ID %d OID %u)",
+ obj->name, obj->dumpId, obj->catId.oid);
+ return;
case DO_SEQUENCE_SET:
snprintf(buf, bufsize,
"SEQUENCE SET %s (ID %d OID %u)",