[ https://issues.apache.org/jira/browse/IMPALA-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17839873#comment-17839873 ]

Michael Smith commented on IMPALA-12997:
----------------------------------------

Ok, I see logs showing that {{INSERT INTO sys.impala_query_log}} had not 
completed when SIGTERM was sent in 
{{test_query_log_table_query_select_dedicate_coordinator}}, which then caused 
the next test - {{test_query_log_table_query_select_mt_dop}} - to fail.
{code}
I0420 18:46:51.047513 3677147 Frontend.java:2138] 
ed4a6e980fa2edd3:ad98a48b00000000] Analyzing query: INSERT INTO 
sys.impala_query_log(cluster_id,query_id,session_id,session_type,hiveserver2_protocol_version,db_user,db_user_connection,db_name,impala_coordinator,query_status,query_state,impala_query_end_state,query_type,network_address,start_time_utc,total_time_ms,query_opts_config,resource_pool,per_host_mem_estimate,dedicated_coord_mem_estimate,per_host_fragment_instances,backends_count,admission_result,cluster_memory_admitted,executor_group,executor_groups,exec_summary,num_rows_fetched,row_materialization_rows_per_sec,row_materialization_time_ms,compressed_bytes_spilled,event_planning_finished,event_submit_for_admission,event_completed_admission,event_all_backends_started,event_rows_available,event_first_row_fetched,event_last_row_fetched,event_unregister_query,read_io_wait_total_ms,read_io_wait_mean_ms,bytes_read_cache_total,bytes_read_total,pernode_peak_mem_min,pernode_peak_mem_max,pernode_peak_mem_mean,sql,plan,tables_queried)
 VALUES 
('','274d6f56fe45a17a:c57fd8e700000000','e942432cb76ae2fc:ff28eafb824adb9f','BEESWAX','','jenkins','jenkins','default','impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000','OK','FINISHED','FINISHED','QUERY','127.0.0.1:58332',UNIX_MICROS_TO_UTC_TIMESTAMP(1713664009827089),CAST(518.503
 AS 
DECIMAL(18,3)),'TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_query_log.py::TestQueryLogTableBeeswax::()::test_query_log_table_query_select_dedicate_coordinator[protocol:beeswax|exec_option:{\'test_replan\':1\;\'batch_size\':0\;\'num_nodes\':0\;\'disable_codegen_rows_threshold\':5000\;\'disable_codegen\':False','default-pool',37879808,109068288,'impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000=1,impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27002=1',2,'Admitted
 immediately',146948096,'default','Executor group 1:\n  Verdict: Match\n   - 
MemoryAsk: 36.12 MB (37879808)\n   - MemoryMax: 8589934592.00 GB 
(9223372036854775807)','Operator              #Hosts  #Inst  Avg Time  Max Time 
 #Rows  Est. #Rows  Peak Mem  Est. Peak Mem  Detail               
\n--------------------------------------------------------------------------------------------------------------------------\nF01:ROOT
                   1      1   0.000ns   0.000ns                      4.01 MB    
    4.00 MB                       \n01:EXCHANGE                1      1   
0.000ns   0.000ns      3           2  16.00 KB       16.00 KB  UNPARTITIONED    
    \nF00:EXCHANGE SENDER        1      1   0.000ns   0.000ns                   
   5.34 KB      112.00 KB                       \n00:SCAN HDFS               1  
    1  60.001ms  60.001ms      3           2  99.00 KB       32.00 MB  
functional.tinytable',3,0,CAST(0 AS DECIMAL(18,3)),0,CAST(360.008 AS 
DECIMAL(18,3)),CAST(360.008 AS DECIMAL(18,3)),CAST(360.008 AS 
DECIMAL(18,3)),CAST(380.008 AS DECIMAL(18,3)),CAST(470.01 AS 
DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(520.011 AS 
DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(60.0013 AS 
DECIMAL(18,3)),CAST(60.0013 AS 
DECIMAL(18,3)),0,38,159352,4247552,2203452,'select * from 
functional.tinytable','----------------\nMax Per-Host Resource Reservation: 
Memory=4.01MB Threads=3\nPer-Host Resource Estimates: Memory=36MB\nDedicated 
Coordinator Resource Estimate: Memory=104MB\nWARNING: The following tables are 
missing relevant table and/or column 
statistics.\nfunctional.tinytable\nAnalyzed query: SELECT * FROM 
functional.tinytable\n\nF01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 
instances=1\n|  Per-Host Resources: mem-estimate=4.02MB mem-reservation=4.00MB 
thread-reservation=1\nPLAN-ROOT SINK\n|  output exprs: functional.tinytable.a, 
functional.tinytable.b\n|  mem-estimate=4.00MB mem-reservation=4.00MB 
spill-buffer=2.00MB thread-reservation=0\n|\n01:EXCHANGE [UNPARTITIONED]\n|  
mem-estimate=16.00KB mem-reservation=0B thread-reservation=0\n|  tuple-ids=0 
row-size=24B cardinality=2\n|  in pipelines: 00(GETNEXT)\n|\nF00:PLAN FRAGMENT 
[RANDOM] hosts=1 instances=1\nPer-Host Resources: mem-estimate=32.11MB 
mem-reservation=8.00KB thread-reservation=2\n00:SCAN HDFS 
[functional.tinytable, RANDOM]\n   HDFS partitions=1/1 files=1 size=38B\n   
stored statistics:\n     table: rows=unavailable size=38B\n     columns: 
unavailable\n   extrapolated-rows=disabled max-scan-range-rows=unavailable\n   
mem-estimate=32.00MB mem-reservation=8.00KB thread-reservation=1\n   
tuple-ids=0 row-size=24B cardinality=2\n   in pipelines: 
00(GETNEXT)\n----------------','functional.tinytable') db: default
I0420 18:46:51.047554 3677147 Frontend.java:2150] 
ed4a6e980fa2edd3:ad98a48b00000000] The original executor group sets from 
executor membership snapshot: [TExecutorGroupSet(curr_num_executors:1, 
expected_num_executors:20, exec_group_name_prefix:)]
I0420 18:46:51.047582 3677147 RequestPoolService.java:200] 
ed4a6e980fa2edd3:ad98a48b00000000] Default pool only, scheduler allocation is 
not specified.
I0420 18:46:51.047616 3677147 Frontend.java:2170] 
ed4a6e980fa2edd3:ad98a48b00000000] A total of 1 executor group sets to be 
considered for auto-scaling: [TExecutorGroupSet(curr_num_executors:1, 
expected_num_executors:20, exec_group_name_prefix:, 
max_mem_limit:9223372036854775807, num_cores_per_executor:2147483647)]
I0420 18:46:51.047667 3677147 Frontend.java:2211] 
ed4a6e980fa2edd3:ad98a48b00000000] Consider executor group set: 
TExecutorGroupSet(curr_num_executors:1, expected_num_executors:20, 
exec_group_name_prefix:, max_mem_limit:9223372036854775807, 
num_cores_per_executor:2147483647) with assumption of 0 cores per node.
I0420 18:46:51.065529 3677147 AnalysisContext.java:514] 
ed4a6e980fa2edd3:ad98a48b00000000] Analysis took 10 ms
I0420 18:46:51.066000 3677147 BaseAuthorizationChecker.java:114] 
ed4a6e980fa2edd3:ad98a48b00000000] Authorization check took 0 ms
I0420 18:46:51.066032 3677147 Frontend.java:2532] 
ed4a6e980fa2edd3:ad98a48b00000000] Analysis and authorization finished.
I0420 18:46:51.103255 3677147 Frontend.java:2428] 
ed4a6e980fa2edd3:ad98a48b00000000] Selected executor group: 
TExecutorGroupSet(curr_num_executors:1, expected_num_executors:20, 
exec_group_name_prefix:, max_mem_limit:9223372036854775807, 
num_cores_per_executor:2147483647), reason: the number of nodes is 1
I0420 18:46:51.110819 3677270 admission-controller.cc:1962] 
ed4a6e980fa2edd3:ad98a48b00000000] Trying to admit 
id=ed4a6e980fa2edd3:ad98a48b00000000 in pool_name=default-pool 
executor_group_name=empty group (using coordinator only) 
per_host_mem_estimate=12.00 MB dedicated_coord_mem_estimate=100.00 MB 
max_requests=-1 max_queued=200 max_mem=-1.00 B is_trivial_query=false
I0420 18:46:51.110836 3677270 admission-controller.cc:1971] 
ed4a6e980fa2edd3:ad98a48b00000000] Stats: agg_num_running=0, agg_num_queued=0, 
agg_mem_reserved=0,  local_host(local_mem_admitted=0, local_trivial_running=0, 
num_admitted_running=0, num_queued=0, backend_mem_reserved=0, topN_query_stats: 
queries=[], total_mem_consumed=0; pool_level_stats: num_running=0, min=0, 
max=0, pool_total_mem=0)
I0420 18:46:51.110849 3677270 admission-controller.cc:1310] 
ed4a6e980fa2edd3:ad98a48b00000000] Admitting query 
id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.110863 3677270 impala-server.cc:2391] 
ed4a6e980fa2edd3:ad98a48b00000000] Registering query locations
I0420 18:46:51.110894 3677270 coordinator.cc:153] 
ed4a6e980fa2edd3:ad98a48b00000000] Exec() 
query_id=ed4a6e980fa2edd3:ad98a48b00000000 stmt=INSERT INTO 
sys.impala_query_log(cluster_id,query_id,session_id,session_type,hiveserver2_protocol_version,db_user,db_user_connection,db_name,impala_coordinator,query_status,query_state,impala_query_end_state,query_type,network_address,start_time_utc,total_time_ms,query_opts_config,resource_pool,per_host_mem_estimate,dedicated_coord_mem_estimate,per_host_fragment_instances,backends_count,admission_result,cluster_memory_admitted,executor_group,executor_groups,exec_summary,num_rows_fetched,row_materialization_rows_per_sec,row_materialization_time_ms,compressed_bytes_spilled,event_planning_finished,event_submit_for_admission,event_completed_admission,event_all_backends_started,event_rows_available,event_first_row_fetched,event_last_row_fetched,event_unregister_query,read_io_wait_total_ms,read_io_wait_mean_ms,bytes_read_cache_total,bytes_read_total,pernode_peak_mem_min,pernode_peak_mem_max,pernode_peak_mem_mean,sql,plan,tables_queried)
 VALUES 
('','274d6f56fe45a17a:c57fd8e700000000','e942432cb76ae2fc:ff28eafb824adb9f','BEESWAX','','jenkins','jenkins','default','impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000','OK','FINISHED','FINISHED','QUERY','127.0.0.1:58332',UNIX_MICROS_TO_UTC_TIMESTAMP(1713664009827089),CAST(518.503
 AS 
DECIMAL(18,3)),'TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_query_log.py::TestQueryLogTableBeeswax::()::test_query_log_table_query_select_dedicate_coordinator[protocol:beeswax|exec_option:{\'test_replan\':1\;\'batch_size\':0\;\'num_nodes\':0\;\'disable_codegen_rows_threshold\':5000\;\'disable_codegen\':False','default-pool',37879808,109068288,'impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000=1,impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27002=1',2,'Admitted
 immediately',146948096,'default','Executor group 1:\n  Verdict: Match\n   - 
MemoryAsk: 36.12 MB (37879808)\n   - MemoryMax: 8589934592.00 GB 
(9223372036854775807)','Operator              #Hosts  #Inst  Avg Time  Max Time 
 #Rows  Est. #Rows  Peak Mem  Est. Peak Mem  Detail               
\n--------------------------------------------------------------------------------------------------------------------------\nF01:ROOT
                   1      1   0.000ns   0.000ns                      4.01 MB    
    4.00 MB                       \n01:EXCHANGE                1      1   
0.000ns   0.000ns      3           2  16.00 KB       16.00 KB  UNPARTITIONED    
    \nF00:EXCHANGE SENDER        1      1   0.000ns   0.000ns                   
   5.34 KB      112.00 KB                       \n00:SCAN HDFS               1  
    1  60.001ms  60.001ms      3           2  99.00 KB       32.00 MB  
functional.tinytable',3,0,CAST(0 AS DECIMAL(18,3)),0,CAST(360.008 AS 
DECIMAL(18,3)),CAST(360.008 AS DECIMAL(18,3)),CAST(360.008 AS 
DECIMAL(18,3)),CAST(380.008 AS DECIMAL(18,3)),CAST(470.01 AS 
DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(520.011 AS 
DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(60.0013 AS 
DECIMAL(18,3)),CAST(60.0013 AS 
DECIMAL(18,3)),0,38,159352,4247552,2203452,'select * from 
functional.tinytable','----------------\nMax Per-Host Resource Reservation: 
Memory=4.01MB Threads=3\nPer-Host Resource Estimates: Memory=36MB\nDedicated 
Coordinator Resource Estimate: Memory=104MB\nWARNING: The following tables are 
missing relevant table and/or column 
statistics.\nfunctional.tinytable\nAnalyzed query: SELECT * FROM 
functional.tinytable\n\nF01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 
instances=1\n|  Per-Host Resources: mem-estimate=4.02MB mem-reservation=4.00MB 
thread-reservation=1\nPLAN-ROOT SINK\n|  output exprs: functional.tinytable.a, 
functional.tinytable.b\n|  mem-estimate=4.00MB mem-reservation=4.00MB 
spill-buffer=2.00MB thread-reservation=0\n|\n01:EXCHANGE [UNPARTITIONED]\n|  
mem-estimate=16.00KB mem-reservation=0B thread-reservation=0\n|  tuple-ids=0 
row-size=24B cardinality=2\n|  in pipelines: 00(GETNEXT)\n|\nF00:PLAN FRAGMENT 
[RANDOM] hosts=1 instances=1\nPer-Host Resources: mem-estimate=32.11MB 
mem-reservation=8.00KB thread-reservation=2\n00:SCAN HDFS 
[functional.tinytable, RANDOM]\n   HDFS partitions=1/1 files=1 size=38B\n   
stored statistics:\n     table: rows=unavailable size=38B\n     columns: 
unavailable\n   extrapolated-rows=disabled max-scan-range-rows=unavailable\n   
mem-estimate=32.00MB mem-reservation=8.00KB thread-reservation=1\n   
tuple-ids=0 row-size=24B cardinality=2\n   in pipelines: 
00(GETNEXT)\n----------------','functional.tinytable')
I0420 18:46:51.110941 3677270 coordinator.cc:529] 
ed4a6e980fa2edd3:ad98a48b00000000] starting execution on 1 backends for 
query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.110977 3677147 impala-server.cc:1483] Query 
ed4a6e980fa2edd3:ad98a48b00000000 has idle timeout of 1s000ms
I0420 18:46:51.111371 3676791 control-service.cc:148] 
ed4a6e980fa2edd3:ad98a48b00000000] ExecQueryFInstances(): 
query_id=ed4a6e980fa2edd3:ad98a48b00000000 
coord=impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000 
#instances=1
I0420 18:46:51.111544 3677270 coordinator.cc:588] 
ed4a6e980fa2edd3:ad98a48b00000000] started execution on 1 backends for 
query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.111675 3677271 coordinator.cc:969] Coordinator waiting for 
backends to finish, 1 remaining. query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.111765 3677273 query-state.cc:984] 
ed4a6e980fa2edd3:ad98a48b00000000] Executing instance. 
instance_id=ed4a6e980fa2edd3:ad98a48b00000000 fragment_idx=0 
per_fragment_instance_idx=0 coord_state_idx=0 #in-flight=1
I0420 18:46:51.249027 3677273 query-state.cc:993] 
ed4a6e980fa2edd3:ad98a48b00000000] Instance completed. 
instance_id=ed4a6e980fa2edd3:ad98a48b00000000 #in-flight=0 status=OK
I0420 18:46:51.249037 3677272 query-state.cc:499] 
ed4a6e980fa2edd3:ad98a48b00000000] UpdateBackendExecState(): last report for 
ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.257818 3676791 coordinator.cc:1119] Backend completed: 
host=impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000 
remaining=1 query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.257846 3677271 coordinator.cc:917] Finalizing query: 
ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.257948 3677271 coordinator.cc:794] ExecState: query 
id=ed4a6e980fa2edd3:ad98a48b00000000 execution completed
I0420 18:46:51.257968 3677271 coordinator.cc:1463] Release admission control 
resources for query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.258080 3677271 client-request-state.cc:1546] Updating metastore 
with 1 altered partitions 
(cluster_id=__HIVE_DEFAULT_PARTITION__/start_time_utc_hour=2024-04-21-01)
I0420 18:46:51.258091 3677271 client-request-state.cc:1573] Executing 
FinalizeDml() using CatalogService
I0420 18:46:51.771235 3677271 ImpaladCatalog.java:232] Adding: 
TABLE:sys.impala_query_log version: 2094 size: 21811
I0420 18:46:51.775816 3677271 BaseMetastoreTableOperations.java:199] Refreshing 
table metadata from new version: 
hdfs://localhost:20500/test-warehouse/sys.db/impala_query_log/metadata/00003-56886cf4-6e5b-4f23-a5a1-de4f4b467264.metadata.json
I0420 18:46:51.779763 3677271 BaseMetastoreCatalog.java:67] Table loaded by 
catalog: ImpalaHiveCatalog.sys.impala_query_log
I0420 18:46:51.780354 3677147 impala-server.cc:1527] UnregisterQuery(): 
query_id=ed4a6e980fa2edd3:ad98a48b00000000
I0420 18:46:51.780367 3677147 coordinator-backend-state.cc:1021] 
query_id=ed4a6e980fa2edd3:ad98a48b00000000 target backend=127.0.0.1:27000: Not 
cancelling because the backend is already done: 
I0420 18:46:51.780372 3677147 coordinator.cc:1085] CancelBackends() 
query_id=ed4a6e980fa2edd3:ad98a48b00000000, tried to cancel 0 backends
I0420 18:46:51.780452 3677147 impala-server.cc:2582] Connection 
4d42528608406246:e4177b27503c0b9c from client in-memory.localhost:0 to server 
internal-server closed. The connection had 1 associated session(s).
I0420 18:46:51.780457 3677147 impala-server.cc:1780] Closing session: 
4d42528608406246:e4177b27503c0b9c
I0420 18:46:51.780462 3677147 impala-server.cc:1836] Closed sessiCaught signal: 
SIGTERM. Daemon will exit.
{code}

> test_query_log tests get stuck trying to write to the log
> ---------------------------------------------------------
>
>                 Key: IMPALA-12997
>                 URL: https://issues.apache.org/jira/browse/IMPALA-12997
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Catalog
>    Affects Versions: Impala 4.4.0
>            Reporter: Michael Smith
>            Assignee: Michael Smith
>            Priority: Major
>
> In some test runs, most tests under test_query_log start to fail with 
> various errors, such as
> {code}
> custom_cluster/test_query_log.py:452: in 
> test_query_log_table_query_select_mt_dop
>     "impala-server.completed-queries.written", 1, 60)
> common/impala_service.py:144: in wait_for_metric_value
>     self.__metric_timeout_assert(metric_name, expected_value, timeout)
> common/impala_service.py:213: in __metric_timeout_assert
>     assert 0, assert_string
> E   AssertionError: Metric impala-server.completed-queries.written did not 
> reach value 1 in 60s.
> E   Dumping debug webpages in JSON format...
> E   Dumped memz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/memz.json
> E   Dumped metrics JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/metrics.json
> E   Dumped queries JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/queries.json
> E   Dumped sessions JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/sessions.json
> E   Dumped threadz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/threadz.json
> E   Dumped rpcz JSON to 
> $IMPALA_HOME/logs/metric_timeout_diags_20240410_12:49:04/json/rpcz.json
> E   Dumping minidumps for impalads/catalogds...
> E   Dumped minidump for Impalad PID 3680802
> E   Dumped minidump for Impalad PID 3680805
> E   Dumped minidump for Impalad PID 3680809
> E   Dumped minidump for Catalogd PID 3680732
> {code}
> or
> {code}
> custom_cluster/test_query_log.py:921: in test_query_log_ignored_sqls
>     assert len(sql_results.data) == 1, "query not found in completed queries 
> table"
> E   AssertionError: query not found in completed queries table
> E   assert 0 == 1
> E    +  where 0 = len([])
> E    +    where [] = <tests.common.impala_connection.ImpylaHS2ResultSet 
> object at 0xffffa00cc350>.data
> {code}
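> The wait_for_metric_value helper in the first trace follows a standard 
> poll-until-deadline pattern; a minimal standalone sketch (get_metric is a 
> hypothetical callable standing in for reading the daemon's metrics debug 
> page, not Impala's actual test API):

```python
import time

def wait_for_metric_value(get_metric, expected, timeout_s, interval_s=0.1):
    """Poll get_metric() until it returns `expected` or the deadline passes,
    mirroring the assertion text seen in the failure above. Illustrative
    sketch only; the real helper lives in common/impala_service.py."""
    deadline = time.time() + timeout_s
    while True:
        value = get_metric()
        if value == expected:
            return value
        if time.time() >= deadline:
            raise AssertionError(
                "Metric did not reach value %s in %ss" % (expected, timeout_s))
        time.sleep(interval_s)
```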
> One symptom that seems related to this is INSERT operations into 
> sys.impala_query_log that log "UnregisterQuery()" but never finish with 
> "Query successfully unregistered".
> We can identify such cases with:
> {code}
> for log in $(ag -l 'INSERT INTO sys.impala_query_log' impalad.*); do
>   echo $log
>   for qid in $(ag -o '[0-9a-f]*:[0-9a-f]*\] Analyzing query: INSERT INTO sys.impala_query_log' $log | cut -d']' -f1); do
>     if ! ag "Query successfully unregistered: query_id=$qid" $log; then
>       echo "$qid not unregistered"
>     fi
>   done
> done
> {code}
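> The same scan can be expressed in Python; a minimal sketch that assumes the 
> log-line shapes quoted in this issue (real log layouts may differ):

```python
import re

def find_unfinished_query_log_inserts(log_text):
    """Return query ids that were analyzed for an INSERT INTO
    sys.impala_query_log but never logged 'Query successfully
    unregistered'. Patterns mirror the ag one-liner above."""
    analyzed = re.findall(
        r'([0-9a-f]+:[0-9a-f]+)\] Analyzing query: INSERT INTO sys\.impala_query_log',
        log_text)
    unregistered = set(re.findall(
        r'Query successfully unregistered: query_id=([0-9a-f]+:[0-9a-f]+)',
        log_text))
    return [qid for qid in analyzed if qid not in unregistered]
```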
> A similar case may occur when creating the table:
> {code}
> for log in $(ag -l 'CREATE TABLE IF NOT EXISTS sys.impala_query_log' impalad.impala-ec2-rhel88-m7g-4xlarge-ondemand-0a5a.vpc.cloudera.com.jenkins.log.INFO.20240410-*); do
>   echo $log
>   QID=$(ag -o '[0-9a-f]*:[0-9a-f]*\] Analyzing query: CREATE TABLE IF NOT EXISTS sys.impala_query_log' $log | cut -d']' -f1)
>   ag "Query successfully unregistered: query_id=$QID" $log
> done
> {code}
> although these checks frequently fail because the test completes and shuts 
> down Impala before the CREATE TABLE query finishes.
> Tracking one of those cases led to catalogd errors that repeated for 1m27s 
> before the test suite restarted catalogd:
> {code}
> W0410 12:48:05.051760 3681790 Tasks.java:456] 
> 6647229faf7637d5:3ec7565b00000000] Retrying task after failure: Waiting for 
> lock on table sys.impala_query_log
> Java exception follows:
> org.apache.iceberg.hive.MetastoreLock$WaitingForLockException: Waiting for 
> lock on table sys.impala_query_log
>       at 
> org.apache.iceberg.hive.MetastoreLock.lambda$acquireLock$1(MetastoreLock.java:217)
>       at 
> org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
>       at 
> org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
>       at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
>       at 
> org.apache.iceberg.hive.MetastoreLock.acquireLock(MetastoreLock.java:209)
>       at org.apache.iceberg.hive.MetastoreLock.lock(MetastoreLock.java:146)
>       at 
> org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:194)
>       at 
> org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:135)
>       at 
> org.apache.iceberg.BaseTransaction.lambda$commitSimpleTransaction$3(BaseTransaction.java:417)
>       at 
> org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
>       at 
> org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
>       at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
>       at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
>       at 
> org.apache.iceberg.BaseTransaction.commitSimpleTransaction(BaseTransaction.java:413)
>       at 
> org.apache.iceberg.BaseTransaction.commitTransaction(BaseTransaction.java:308)
>       at 
> org.apache.impala.service.CatalogOpExecutor.updateCatalogImpl(CatalogOpExecutor.java:7220)
>       at 
> org.apache.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:6939)
>       at 
> org.apache.impala.service.JniCatalog.lambda$updateCatalog$15(JniCatalog.java:504)
>       at 
> org.apache.impala.service.JniCatalogOp.lambda$execAndSerialize$1(JniCatalogOp.java:90)
>       at org.apache.impala.service.JniCatalogOp.execOp(JniCatalogOp.java:58)
>       at 
> org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:89)
>       at 
> org.apache.impala.service.JniCatalogOp.execAndSerialize(JniCatalogOp.java:100)
>       at 
> org.apache.impala.service.JniCatalog.execAndSerialize(JniCatalog.java:245)
>       at 
> org.apache.impala.service.JniCatalog.execAndSerialize(JniCatalog.java:259)
>       at 
> org.apache.impala.service.JniCatalog.updateCatalog(JniCatalog.java:503)
> {code}
> This seems to correspond to the first test failure and persisted through all 
> subsequent tests, so the lock was likely acquired from HMS and then never 
> released.
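> The repeated WaitingForLockException warnings are the visible side of a 
> retry loop: Tasks.runTaskWithRetry keeps re-attempting lock acquisition 
> until its retry budget runs out, so a lock that is never released produces 
> exactly this steady stream of "Retrying task after failure" messages. An 
> illustrative sketch of the pattern (names and timings are hypothetical, not 
> Iceberg's actual values):

```python
import time

class WaitingForLockException(Exception):
    """Stand-in for org.apache.iceberg.hive.MetastoreLock's exception."""

def acquire_with_retry(try_lock, timeout_s, interval_s=0.01):
    """Retry try_lock() until it succeeds or timeout_s elapses. A lock that
    is held forever exhausts the budget and surfaces the exception, as seen
    in the catalogd log above."""
    deadline = time.time() + timeout_s
    attempts = 0
    while True:
        attempts += 1
        if try_lock():
            return attempts
        if time.time() >= deadline:
            raise WaitingForLockException("Waiting for lock on table")
        time.sleep(interval_s)
```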



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
