[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-02-23 Thread Quanlong Huang (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820043#comment-17820043
 ] 

Quanlong Huang commented on IMPALA-12831:
-

To work around the issue, start catalogd with 
--enable_incremental_metadata_updates=false.

> HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate 
> incremental updates
> ---
>
> Key: IMPALA-12831
> URL: https://issues.apache.org/jira/browse/IMPALA-12831
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: Impala 4.0.0, Impala 4.1.0, Impala 4.2.0, Impala 4.1.1, 
> Impala 4.1.2, Impala 4.3.0
>Reporter: Quanlong Huang
>Assignee: Quanlong Huang
>Priority: Critical
> Fix For: Impala 4.4.0
>
> When enable_incremental_metadata_updates=true (default), catalogd sends 
> incremental partition updates to coordinators, which goes into 
> HdfsTable.toMinimalTCatalogObject():
> {code:java}
>   public TCatalogObject toMinimalTCatalogObject() {
> TCatalogObject catalogObject = super.toMinimalTCatalogObject();
> if (!BackendConfig.INSTANCE.isIncrementalMetadataUpdatesEnabled()) {
>   return catalogObject;
> }
> catalogObject.getTable().setTable_type(TTableType.HDFS_TABLE);
> THdfsTable hdfsTable = new THdfsTable(hdfsBaseDir_, getColumnNames(),
> nullPartitionKeyValue_, nullColumnValue_,
> /*idToPartition=*/ new HashMap<>(),
> /*prototypePartition=*/ new THdfsPartition());
> for (HdfsPartition part : partitionMap_.values()) {
>   hdfsTable.partitions.put(part.getId(), part.toMinimalTHdfsPartition());
> }
> hdfsTable.setHas_full_partitions(false);
> // The minimal catalog object of partitions contains the partition names.
> hdfsTable.setHas_partition_names(true);
> catalogObject.getTable().setHdfs_table(hdfsTable);
> return catalogObject;
>   }{code}
> Accessing table fields without holding the table read lock can fail due to 
> concurrent DDLs. Any code path that uses this method (e.g. INVALIDATE 
> commands) could hit this issue. We've seen the event-processor fail while 
> processing a RELOAD event that invalidates an HdfsTable:
> {noformat}
> E0216 16:23:44.283689   253 MetastoreEventsProcessor.java:899] Unexpected 
> exception received while processing event
> Java exception follows:
> java.util.ConcurrentModificationException
>   at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:911)
>   at java.util.ArrayList$Itr.next(ArrayList.java:861)
>   at org.apache.impala.catalog.Column.toColumnNames(Column.java:148)
>   at org.apache.impala.catalog.Table.getColumnNames(Table.java:844)
>   at 
> org.apache.impala.catalog.HdfsTable.toMinimalTCatalogObject(HdfsTable.java:2132)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.addIncompleteTable(CatalogServiceCatalog.java:2221)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.addIncompleteTable(CatalogServiceCatalog.java:2202)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.invalidateTable(CatalogServiceCatalog.java:2797)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$ReloadEvent.processTableInvalidate(MetastoreEvents.java:2734)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$ReloadEvent.process(MetastoreEvents.java:2656)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$MetastoreEvent.processIfEnabled(MetastoreEvents.java:522)
>   at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:1052)
>   at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:881)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:750){noformat}
> I can reproduce the issue using the following test:
> {code:python}
>   @CustomClusterTestSuite.with_args(
> catalogd_args="--enable_incremental_metadata_updates=true")
>   def test_concurrent_invalidate_metadata_with_refresh(self, unique_database):
> # Create a wide table with some partitions
> tbl = unique_database + ".wide_tbl"
> create_stmt = "create table {} (".format(tbl)
> for i in range(600):
>   create_stmt += "col{} int, ".format(i)
> create_stmt += "col600 int) partitioned by (p int) stored as textfile"
> self.execute_query(create_stmt

[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-02-27 Thread Quanlong Huang (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17821128#comment-17821128
 ] 

Quanlong Huang commented on IMPALA-12831:
-

Note that this is not just a bug in processing RELOAD events. It affects any 
code path that uses HdfsTable.toMinimalTCatalogObject() without holding the 
table lock, e.g. executing INVALIDATE METADATA commands. When concurrent DDLs 
update the table columns or the partition map, the invalidate can hit this 
bug.
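
The ConcurrentModificationException in the stack trace above comes from 
ArrayList's fail-fast iterator detecting a mutation mid-iteration (here, a 
concurrent reload replacing the column list). A minimal single-threaded sketch 
of the same failure mode (class and method names are illustrative, not Impala 
code):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class ComodificationDemo {
  // Iterates the list while mutating it, as a concurrent reload would.
  // ArrayList's fail-fast iterator throws ConcurrentModificationException.
  static boolean failsWithComodification(List<String> columns) {
    try {
      for (String col : columns) {
        columns.add(col + "_reloaded");  // mutation during iteration
      }
      return false;
    } catch (ConcurrentModificationException e) {
      return true;  // the iterator detected the concurrent modification
    }
  }

  public static void main(String[] args) {
    List<String> cols = new ArrayList<>(List.of("col0", "col1"));
    System.out.println(failsWithComodification(cols));  // true
  }
}
```

In the real bug the mutation comes from another thread reloading the table, so 
the exception is racy rather than guaranteed, but the mechanism is the same.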


[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-03-12 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825678#comment-17825678
 ] 

ASF subversion and git services commented on IMPALA-12831:
--

Commit ab6c9467f6347671b971dbce4c640bea032b6ed9 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ab6c9467f ]

IMPALA-12831: Fix HdfsTable.toMinimalTCatalogObject() failed by concurrent 
modification

HdfsTable.toMinimalTCatalogObject() is not always invoked while holding
the table lock. E.g. when invalidating a table, we replace the HdfsTable
instance with an IncompleteTable instance and then invoke
HdfsTable.toMinimalTCatalogObject() to get the removed catalog object.
However, the HdfsTable instance can be modified in the meantime by a
concurrent DDL/DML that reloads it, e.g. a REFRESH statement. This
causes HdfsTable.toMinimalTCatalogObject() to fail with a
ConcurrentModificationException on the column/partition list.

This patch fixes the issue by trying to acquire the table read lock in
HdfsTable.toMinimalTCatalogObject(). If that fails, the partition ids
and names are not returned. The method is also updated to not collect
the column list, since it is unused.

Tests
 - Added e2e test
 - Ran CORE tests

Change-Id: Ie9f8e65c0bd24000241eedf8ca765c1e4e14fdb3
Reviewed-on: http://gerrit.cloudera.org:8080/21072
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
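
The locking change the commit describes can be sketched as follows. This is a 
hedged illustration with hypothetical names (TableSnapshotSketch etc.), not 
the actual patch: the point is that the snapshot method only *tries* the read 
lock and degrades gracefully, returning no partition names (analogous to 
setHas_partition_names(false)) when a concurrent writer holds the table lock:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TableSnapshotSketch {
  final ReentrantReadWriteLock tableLock = new ReentrantReadWriteLock();
  private final Map<Long, String> partitionNames = new HashMap<>();

  void addPartition(long id, String name) {
    tableLock.writeLock().lock();
    try {
      partitionNames.put(id, name);
    } finally {
      tableLock.writeLock().unlock();
    }
  }

  /** Best-effort snapshot: partition names are included only if the read
   *  lock can be acquired immediately; otherwise degrade gracefully. */
  List<String> toMinimalSnapshot() {
    if (!tableLock.readLock().tryLock()) {
      return new ArrayList<>();  // concurrent writer: skip partition names
    }
    try {
      return new ArrayList<>(partitionNames.values());
    } finally {
      tableLock.readLock().unlock();
    }
  }

  public static void main(String[] args) throws Exception {
    TableSnapshotSketch t = new TableSnapshotSketch();
    t.addPartition(1, "p=1");
    System.out.println(t.toMinimalSnapshot());  // [p=1]

    // Simulate a concurrent DDL holding the write lock.
    CountDownLatch held = new CountDownLatch(1);
    CountDownLatch release = new CountDownLatch(1);
    Thread ddl = new Thread(() -> {
      t.tableLock.writeLock().lock();
      held.countDown();
      try { release.await(); } catch (InterruptedException ignored) {}
      t.tableLock.writeLock().unlock();
    });
    ddl.start();
    held.await();
    System.out.println(t.toMinimalSnapshot());  // [] -- lock busy, names skipped
    release.countDown();
    ddl.join();
  }
}
```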



[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-05-06 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844101#comment-17844101
 ] 

ASF subversion and git services commented on IMPALA-12831:
--

Commit ee21427d26620b40d38c706b4944d2831f84f6f5 in impala's branch 
refs/heads/master from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=ee21427d2 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Partitions dropped since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, these are included in the delta and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused and the leak
resolves itself.

The leak is observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). A Precondition check then
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

The patch also removes the Precondition check in the coordinator and
just reports the stale partitions, since IMPALA-12831 could also
introduce them.

It also adds a log line in CatalogOpExecutor.alterTableDropPartition()
to show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
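
The bookkeeping described in this commit can be sketched as follows. This is a 
simplified illustration with hypothetical names; the real logic lives in 
HdfsTable and CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta(). Dropped 
partition names accumulate until the next delta, and with the fix they are 
also exported when the whole table goes to the deleteLog:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PartitionDeltaSketch {
  private final Map<Long, String> partitions = new HashMap<>();
  private final Set<String> droppedPartitions = new HashSet<>();

  void addPartition(long id, String name) { partitions.put(id, name); }

  void dropPartition(long id) {
    String name = partitions.remove(id);
    if (name != null) droppedPartitions.add(name);
  }

  /** Next catalog update: publish pending deletions and clear the set. */
  List<String> collectDroppedForDelta() {
    List<String> result = new ArrayList<>(droppedPartitions);
    droppedPartitions.clear();
    return result;
  }

  /** Table invalidated/dropped: with the fix, still-pending dropped
   *  partitions (the new 'dropped_partitions' field) are reported in
   *  addition to deletions for every live partition. */
  List<String> collectDeletionsForTableRemoval() {
    List<String> result = new ArrayList<>(droppedPartitions);
    result.addAll(partitions.values());
    droppedPartitions.clear();
    return result;
  }

  public static void main(String[] args) {
    PartitionDeltaSketch t = new PartitionDeltaSketch();
    t.addPartition(1, "p=2024-01-01");
    t.addPartition(2, "p=2024-01-02");
    t.dropPartition(1);
    // Invalidating the table now includes the pending dropped name, so the
    // topic entry for p=2024-01-01 no longer leaks.
    System.out.println(t.collectDeletionsForTableRemoval());
  }
}
```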



[jira] [Commented] (IMPALA-12831) HdfsTable.toMinimalTCatalogObject() should hold table read lock to generate incremental updates

2024-05-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/IMPALA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844280#comment-17844280
 ] 

ASF subversion and git services commented on IMPALA-12831:
--

Commit 5d32919f46117213249c60574f77e3f9bb66ed90 in impala's branch 
refs/heads/branch-4.4.0 from stiga-huang
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=5d32919f4 ]

IMPALA-13009: Fix catalogd not sending deletion updates for some dropped 
partitions

*Background*

Since IMPALA-3127, catalogd sends incremental partition updates based on
the last sent table snapshot ('maxSentPartitionId_' to be specific).
Partitions dropped since the last catalog update are tracked in
'droppedPartitions_' of HdfsTable. When catalogd collects the next
catalog update, these are included in the delta and HdfsTable then
clears the set. See details in
CatalogServiceCatalog#addHdfsPartitionsToCatalogDelta().

If an HdfsTable is invalidated, it's replaced with an IncompleteTable
which doesn't track any partitions. The HdfsTable object is then added
to the deleteLog so catalogd can send deletion updates for all its
partitions. The same happens if the HdfsTable is dropped. However, the
previously dropped partitions are not collected in this case, which
results in a leak in the catalog topic if the partition name is never
reused. Note that in the catalog topic, the key of a partition update
consists of the table name and the partition name, so if the partition
is added back to the table, the topic key is reused and the leak
resolves itself.

The leak is observed when a coordinator restarts. In the initial
catalog update sent from the statestore, the coordinator will find some
partition updates that are not referenced by the HdfsTable (assuming the
table is used again after the INVALIDATE). A Precondition check then
fails and the table is not added to the coordinator.

*Overview of the patch*

This patch fixes the leak by also collecting the dropped partitions when
adding the HdfsTable to the deleteLog. A new field, dropped_partitions,
is added in THdfsTable to collect them. It's only used when catalogd
collects catalog updates.

The patch also removes the Precondition check in the coordinator and
just reports the stale partitions, since IMPALA-12831 could also
introduce them.

It also adds a log line in CatalogOpExecutor.alterTableDropPartition()
to show the dropped partition names for better diagnostics.

Tests
 - Added e2e tests

Change-Id: I12a68158dca18ee48c9564ea16b7484c9f5b5d21
Reviewed-on: http://gerrit.cloudera.org:8080/21326
Reviewed-by: Impala Public Jenkins 
Tested-by: Impala Public Jenkins 
(cherry picked from commit ee21427d26620b40d38c706b4944d2831f84f6f5)

