[jira] [Comment Edited] (IMPALA-12997) test_query_log tests get stuck trying to write to the log
[ https://issues.apache.org/jira/browse/IMPALA-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17839873#comment-17839873 ]

Michael Smith edited comment on IMPALA-12997 at 4/22/24 10:35 PM:
------------------------------------------------------------------

Ok, the test prior to the failure - {{test_query_log_table_query_select_dedicate_coordinator}} - uses {{impalad_graceful_shutdown=False}}, which presumably then causes {{test_query_log_table_query_select_mt_dop}} to fail.


was (Author: JIRAUSER288956):
Ok, I see logs showing that the {{INSERT INTO sys.impala_query_log}} was not completed when SIGTERM was sent in {{test_query_log_table_query_select_dedicate_coordinator}}, which then causes the next test - {{test_query_log_table_query_select_mt_dop}} - to fail.

{code}
I0420 18:46:51.047513 3677147 Frontend.java:2138] ed4a6e980fa2edd3:ad98a48b] Analyzing query: INSERT INTO sys.impala_query_log(cluster_id,query_id,session_id,session_type,hiveserver2_protocol_version,db_user,db_user_connection,db_name,impala_coordinator,query_status,query_state,impala_query_end_state,query_type,network_address,start_time_utc,total_time_ms,query_opts_config,resource_pool,per_host_mem_estimate,dedicated_coord_mem_estimate,per_host_fragment_instances,backends_count,admission_result,cluster_memory_admitted,executor_group,executor_groups,exec_summary,num_rows_fetched,row_materialization_rows_per_sec,row_materialization_time_ms,compressed_bytes_spilled,event_planning_finished,event_submit_for_admission,event_completed_admission,event_all_backends_started,event_rows_available,event_first_row_fetched,event_last_row_fetched,event_unregister_query,read_io_wait_total_ms,read_io_wait_mean_ms,bytes_read_cache_total,bytes_read_total,pernode_peak_mem_min,pernode_peak_mem_max,pernode_peak_mem_mean,sql,plan,tables_queried) VALUES ('','274d6f56fe45a17a:c57fd8e7','e942432cb76ae2fc:ff28eafb824adb9f','BEESWAX','','jenkins','jenkins','default','impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000','OK','FINISHED','FINISHED','QUERY','127.0.0.1:58332',UNIX_MICROS_TO_UTC_TIMESTAMP(1713664009827089),CAST(518.503 AS DECIMAL(18,3)),'TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_query_log.py::TestQueryLogTableBeeswax::()::test_query_log_table_query_select_dedicate_coordinator[protocol:beeswax|exec_option:{\'test_replan\':1\;\'batch_size\':0\;\'num_nodes\':0\;\'disable_codegen_rows_threshold\':5000\;\'disable_codegen\':False','default-pool',37879808,109068288,'impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27000=1,impala-ec2-rhel88-m7g-4xlarge-ondemand-124c.vpc.cloudera.com:27002=1',2,'Admitted immediately',146948096,'default','Executor group 1:\n Verdict: Match\n - MemoryAsk: 36.12 MB (37879808)\n - MemoryMax: 8589934592.00 GB (9223372036854775807)','Operator #Hosts #Inst Avg Time Max Time #Rows Est. #Rows Peak Mem Est. Peak Mem Detail \n--\nF01:ROOT 1 1 0.000ns 0.000ns 4.01 MB 4.00 MB \n01:EXCHANGE1 1 0.000ns 0.000ns 3 2 16.00 KB 16.00 KB UNPARTITIONED \nF00:EXCHANGE SENDER1 1 0.000ns 0.000ns 5.34 KB 112.00 KB \n00:SCAN HDFS 1 1 60.001ms 60.001ms 3 2 99.00 KB 32.00 MB functional.tinytable',3,0,CAST(0 AS DECIMAL(18,3)),0,CAST(360.008 AS DECIMAL(18,3)),CAST(360.008 AS DECIMAL(18,3)),CAST(360.008 AS DECIMAL(18,3)),CAST(380.008 AS DECIMAL(18,3)),CAST(470.01 AS DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(520.011 AS DECIMAL(18,3)),CAST(60.0013 AS DECIMAL(18,3)),CAST(60.0013 AS DECIMAL(18,3)),0,38,159352,4247552,2203452,'select * from functional.tinytable','\nMax Per-Host Resource Reservation: Memory=4.01MB Threads=3\nPer-Host Resource Estimates: Memory=36MB\nDedicated Coordinator Resource Estimate: Memory=104MB\nWARNING: The following tables are missing relevant table and/or column statistics.\nfunctional.tinytable\nAnalyzed query: SELECT * FROM functional.tinytable\n\nF01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1\n| Per-Host Resources: mem-estimate=4.02MB mem-reservation=4.00MB thread-reservation=1\nPLAN-ROOT SINK\n| output exprs: functional.tinytable.a, functional.tinytable.b\n| mem-estimate=4.00MB mem-reservation=4.00MB spill-buffer=2.00MB thread-reservation=0\n|\n01:EXCHANGE [UNPARTITIONED]\n| mem-estimate=16.00KB mem-reservation=0B thread-reservation=0\n| tuple-ids=0 row-size=24B cardinality=2\n| in pipelines: 00(GETNEXT)\n|\nF00:PLAN FRAGMENT [RANDOM] hosts=1 instances=1\nPer-Host Resources: mem-estimate=32.11MB mem-reservation=8.00KB thre
{code}
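For context on the {{impalad_graceful_shutdown}} interaction described above, here is a minimal sketch of how a custom-cluster test can opt out of graceful shutdown. The decorator comes from Impala's custom-cluster test framework, but the class name, startup flag, and test body shown here are illustrative assumptions, not copied from test_query_log.py.

{code:python}
from tests.common.custom_cluster_test_suite import CustomClusterTestSuite


class TestQueryLogGracefulShutdownSketch(CustomClusterTestSuite):
  """Illustrative only; not the real test class from test_query_log.py."""

  @CustomClusterTestSuite.with_args(
      # Hypothetical startup flag; the real tests pass their own
      # workload-management options to the coordinator.
      impalad_args="--enable_workload_mgmt=true",
      # With graceful shutdown disabled, the impalads receive SIGTERM at
      # teardown, which can interrupt the INSERT INTO sys.impala_query_log
      # flush and leave the next test waiting on an unfinished write.
      impalad_graceful_shutdown=False)
  def test_query_select_sketch(self, vector):
    # Run a query so that a row is queued for sys.impala_query_log at teardown.
    self.execute_query("select * from functional.tinytable")
{code}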
[jira] [Comment Edited] (IMPALA-12997) test_query_log tests get stuck trying to write to the log
[ https://issues.apache.org/jira/browse/IMPALA-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836321#comment-17836321 ]

Michael Smith edited comment on IMPALA-12997 at 4/11/24 7:04 PM:
-----------------------------------------------------------------

Relevant catalogd logs from when the error first started:

{code}
I0410 12:47:58.355312 3681745 catalog-server.cc:616] A catalog update with 2 entries is assembled. Catalog version: 2135 Last sent catalog version: 2133
I0410 12:47:58.473973 3681750 TAcceptQueueServer.cpp:355] New connection to server CatalogService from client
I0410 12:47:58.478902 3681750 TAcceptQueueServer.cpp:355] New connection to server CatalogService from client
I0410 12:47:58.491712 3681796 JniUtil.java:166] e040d751d8ffae68:d64122e1] prioritizeLoad request: Prioritize load on table(s): sys.impala_query_log
I0410 12:47:58.491765 3681795 JniUtil.java:166] 6041e7dc71f87b5b:9a4ff6f0] prioritizeLoad request: Prioritize load on table(s): sys.impala_query_log
I0410 12:47:58.492286 3681795 JniUtil.java:177] 6041e7dc71f87b5b:9a4ff6f0] Finished prioritizeLoad request: Prioritize load on table(s): sys.impala_query_log. Time spent: 1ms
I0410 12:47:58.492334 3681796 JniUtil.java:177] e040d751d8ffae68:d64122e1] Finished prioritizeLoad request: Prioritize load on table(s): sys.impala_query_log. Time spent: 1ms
I0410 12:47:58.492388 3681327 TableLoadingMgr.java:310] Metadata load request already in progress for table: sys.impala_query_log
I0410 12:47:58.494186 3681797 TableLoader.java:81] Loading metadata for: sys.impala_query_log (background load)
I0410 12:47:58.494704 3681326 TableLoadingMgr.java:74] Loading metadata for table: sys.impala_query_log
I0410 12:47:58.494817 3681326 TableLoadingMgr.java:76] Remaining items in queue: 0. Loads in progress: 1
I0410 12:47:58.536164 3681797 Table.java:248] createEventId_ for table: sys.impala_query_log set to: -1
W0410 12:47:58.575927 3681797 HiveConf.java:6718] HiveConf of name hive.metastore.partitions.parameters.exclude.pattern does not exist
I0410 12:47:58.603469 3681797 CatalogUtil.java:302] Loading custom FileIO implementation: org.apache.iceberg.hadoop.HadoopFileIO
W0410 12:47:58.681820 3681797 HiveConf.java:6718] HiveConf of name hive.metastore.partitions.parameters.exclude.pattern does not exist
I0410 12:47:58.686848 3681797 HiveMetaStoreClient.java:280] HMS client filtering is enabled.
I0410 12:47:58.687023 3681797 HiveMetaStoreClient.java:738] Trying to connect to metastore with URI (thrift://localhost:9083) in binary transport mode
I0410 12:47:58.687292 3681797 HiveMetaStoreClient.java:769] Opened a connection to metastore, URI (thrift://localhost:9083) current connections: 11
I0410 12:47:58.687708 3681797 HiveMetaStoreClient.java:839] Connected to metastore.
I0410 12:47:58.687781 3681797 RetryingMetaStoreClient.java:100] RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.metastore.HiveMetaStoreClient ugi=jenkins (auth:SIMPLE) retries=1 delay=1 lifetime=0
I0410 12:47:58.696296 3681797 BaseMetastoreTableOperations.java:199] Refreshing table metadata from new version: hdfs://localhost:20500/test-warehouse/sys.db/impala_query_log/metadata/4-f507db8e-8d90-476e-b09b-c7c81a608641.metadata.json
I0410 12:47:59.041594 3681797 BaseMetastoreCatalog.java:67] Table loaded by catalog: ImpalaHiveCatalog.sys.impala_query_log
I0410 12:47:59.066388 3681797 SnapshotScan.java:124] Scanning table ImpalaHiveCatalog.sys.impala_query_log snapshot 4998562745977744358 created at 2024-04-10T19:47:53.382+00:00 with filter true
I0410 12:47:59.360441 3681797 LoggingMetricsReporter.java:38] Received metrics report: ScanReport{tableName=ImpalaHiveCatalog.sys.impala_query_log, snapshotId=4998562745977744358, filter=true, schemaId=0, projectedFieldIds=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49], projectedFieldNames=[cluster_id, query_id, session_id, session_type, hiveserver2_protocol_version, db_user, db_user_connection, db_name, impala_coordinator, query_status, query_state, impala_query_end_state, query_type, network_address, start_time_utc, total_time_ms, query_opts_config, resource_pool, per_host_mem_estimate, dedicated_coord_mem_estimate, per_host_fragment_instances, backends_count, admission_result, cluster_memory_admitted, executor_group, executor_groups, exec_summary, num_rows_fetched, row_materialization_rows_per_sec, row_materialization_time_ms, compressed_bytes_spilled, event_planning_finished, event_submit_for_admission, event_completed_admission, event_all_backends_started, event_rows_available, event_first_row_fetched, event_last_row_fetched, event_unregister_query, read_io_wait_total_ms, read_io_wait_mean_ms, bytes_read_cache_total, bytes_read_total, pernode_peak_mem_min, per
{code}
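As a side note, one way to confirm whether the row from the prior test ever reached the table is to query it from a test. Below is a minimal sketch assuming the standard ImpalaTestSuite helpers; the class and test names are hypothetical, and the column names are taken from the INSERT statement logged in the earlier comment.

{code:python}
from tests.common.impala_test_suite import ImpalaTestSuite


class TestQueryLogRowPresent(ImpalaTestSuite):
  """Illustrative check; not part of the actual test suite."""

  def test_query_log_row_present(self):
    # 'sql' and 'query_state' are columns of sys.impala_query_log per the
    # INSERT statement shown in the earlier comment.
    result = self.execute_query(
        "select query_id, query_state from sys.impala_query_log "
        "where sql = 'select * from functional.tinytable'")
    # If SIGTERM arrived before the workload-management flush completed,
    # no row will be present for that query.
    assert len(result.data) > 0, "query log row was never written"
{code}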