The case without EXPLAIN also takes a long time to submit:

19/06/04 05:56:37 DEBUG SparkSQLOperationManager: Created Operation for
> select count(*) from perf_as_reportads with
> session=org.apache.hive.service.cli.session.HiveSessionImpl@1f30fc84,
> runInBackground=true
> 19/06/04 05:56:37 INFO SparkExecuteStatementOperation: Running query
> 'select count(*) from perf_as_reportads' with
> d3ef425f-ab52-40c5-ab1c-fc1a2e2ac8bf
> 19/06/04 05:56:53 INFO ContextCleaner: Cleaned accumulator 5
> 19/06/04 05:56:53 INFO ContextCleaner: Cleaned accumulator 4
> 19/06/04 05:57:06 INFO ThriftCLIService: Client protocol version:
> HIVE_CLI_SERVICE_PROTOCOL_V8
> 19/06/04 05:57:06 INFO SessionState: Created local directory:
> /tmp/5db11ec6-4c93-4152-a4d2-ce1307005c94_resources
> 19/06/04 05:57:06 INFO SessionState: Created HDFS directory:
> /tmp/spark/spark/5db11ec6-4c93-4152-a4d2-ce1307005c94
> 19/06/04 05:57:06 INFO SessionState: Created local directory:
> /tmp/spark/5db11ec6-4c93-4152-a4d2-ce1307005c94
> 19/06/04 05:57:06 INFO SessionState: Created HDFS directory:
> /tmp/spark/spark/5db11ec6-4c93-4152-a4d2-ce1307005c94/_tmp_space.db
> 19/06/04 05:57:06 INFO HiveSessionImpl: Operation log session directory is
> created: /tmp/spark/operation_logs/5db11ec6-4c93-4152-a4d2-ce1307005c94
> 19/06/04 05:58:06 INFO ThriftCLIService: Client protocol version:
> HIVE_CLI_SERVICE_PROTOCOL_V8
> 19/06/04 05:58:06 INFO SessionState: Created local directory:
> /tmp/23027006-7f2d-4f54-af99-608264a82f51_resources
> 19/06/04 05:58:06 INFO SessionState: Created HDFS directory:
> /tmp/spark/spark/23027006-7f2d-4f54-af99-608264a82f51
> 19/06/04 05:58:06 INFO SessionState: Created local directory:
> /tmp/spark/23027006-7f2d-4f54-af99-608264a82f51
> 19/06/04 05:58:06 INFO SessionState: Created HDFS directory:
> /tmp/spark/spark/23027006-7f2d-4f54-af99-608264a82f51/_tmp_space.db
> 19/06/04 05:58:06 INFO HiveSessionImpl: Operation log session directory is
> created: /tmp/spark/operation_logs/23027006-7f2d-4f54-af99-608264a82f51
> 19/06/04 05:58:17 DEBUG SparkExecuteStatementOperation: == Parsed Logical
> Plan ==
> 'Project [unresolvedalias('count(1), None)]
> +- 'UnresolvedRelation `perf_as_reportads`
> == Analyzed Logical Plan ==
> count(1): bigint
> Aggregate [count(1) AS count(1)#227L]
> +- SubqueryAlias perf_as_reportads
>    +- HiveTableRelation `default`.`perf_as_reportads`,
> org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,
> [ad_app_id#177, ad_app_object_id#178, ad_clicked#179, ad_duration#180L,
> ad_start_time#181L, app_version#182, campaign_id#183,
> campaign_override#184L, campaign_rate#185, campaign_rate_type#186,
> city#187, completed_view#188, connection#189, country#190, creative_id#191,
> device_height#192L, device_id#193, device_language#194, device_make#195,
> device_model#196, device_user_agent#197, device_width#198L,
> do_not_track#199, event_id#200, ... 24 more fields], [dt#225, hr#226]
> == Optimized Logical Plan ==
> Aggregate [count(1) AS count(1)#227L]
> +- Project
>    +- HiveTableRelation `default`.`perf_as_reportads`,
> org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,
> [ad_app_id#177, ad_app_object_id#178, ad_clicked#179, ad_duration#180L,
> ad_start_time#181L, app_version#182, campaign_id#183,
> campaign_override#184L, campaign_rate#185, campaign_rate_type#186,
> city#187, completed_view#188, connection#189, country#190, creative_id#191,
> device_height#192L, device_id#193, device_language#194, device_make#195,
> device_model#196, device_user_agent#197, device_width#198L,
> do_not_track#199, event_id#200, ... 24 more fields], [dt#225, hr#226]
> == Physical Plan ==
> *(2) HashAggregate(keys=[], functions=[count(1)], output=[count(1)#227L])
> +- Exchange SinglePartition
>    +- *(1) HashAggregate(keys=[], functions=[partial_count(1)],
> output=[count#230L])
>       +- HiveTableScan HiveTableRelation `default`.`perf_as_reportads`,
> org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,
> [ad_app_id#177, ad_app_object_id#178, ad_clicked#179, ad_duration#180L,
> ad_start_time#181L, app_version#182, campaign_id#183,
> campaign_override#184L, campaign_rate#185, campaign_rate_type#186,
> city#187, completed_view#188, connection#189, country#190, creative_id#191,
> device_height#192L, device_id#193, device_language#194, device_make#195,
> device_model#196, device_user_agent#197, device_width#198L,
> do_not_track#199, event_id#200, ... 24 more fields], [dt#225, hr#226]
> 19/06/04 05:58:17 INFO CodeGenerator: Code generated in 9.873723 ms
> 19/06/04 05:58:17 INFO CodeGenerator: Code generated in 16.34127 ms
> 19/06/04 05:58:17 INFO CodeGenerator: Code generated in 8.782222 ms
> 19/06/04 05:58:17 INFO MemoryStore: Block broadcast_0 stored as values in
> memory (estimated size 389.5 KB, free 911.9 MB)
> 19/06/04 05:58:17 INFO MemoryStore: Block broadcast_0_piece0 stored as
> bytes in memory (estimated size 33.4 KB, free 911.9 MB)
> 19/06/04 05:58:17 INFO BlockManagerInfo: Added broadcast_0_piece0 in
> memory on ip-172-19-104-48.ec2.internal:32775 (size: 33.4 KB, free: 912.3
> MB)
> 19/06/04 05:58:17 INFO SparkContext: Created broadcast 0 from
> 19/06/04 05:58:18 INFO SQLStdHiveAccessController: Created
> SQLStdHiveAccessController for session context : HiveAuthzSessionContext
> [sessionString=96cb5192-b973-41cc-92a7-5abf45403169, clientType=HIVECLI]
> 19/06/04 05:58:18 WARN SessionState: METASTORE_FILTER_HOOK will be
> ignored, since hive.security.authorization.manager is set to instance of
> HiveAuthorizerFactory.
> 19/06/04 05:58:18 INFO HiveMetaStoreClient: Mestastore configuration
> metastore.filter.hook changed from
> org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl to
> org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook
> 19/06/04 05:58:18 INFO HiveMetaStoreClient: Closed a connection to
> metastore, current connections: 0
> 19/06/04 05:58:18 INFO HiveMetaStoreClient: Trying to connect to metastore
> with URI thrift://ip-172-19-104-48.ec2.internal:9083
> 19/06/04 05:58:18 INFO HiveMetaStoreClient: Opened a connection to
> metastore, current connections: 1
> 19/06/04 05:58:18 INFO HiveMetaStoreClient: Connected to metastore.
> 19/06/04 05:58:18 INFO RetryingMetaStoreClient: RetryingMetaStoreClient
> proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> ugi=spark (auth:SIMPLE) retries=1 delay=5 lifetime=0
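The timestamps in the log above show the size of the gap: the operation is created at 05:56:37, but the plan is only logged at 05:58:17. A quick sketch computing that gap from the two quoted timestamps (values copied from the log excerpt, nothing else assumed):

```python
from datetime import datetime

# Timestamps copied from the log lines above
created = datetime.strptime("19/06/04 05:56:37", "%y/%m/%d %H:%M:%S")
planned = datetime.strptime("19/06/04 05:58:17", "%y/%m/%d %H:%M:%S")

# Gap between operation creation and plan generation, in seconds
delay = (planned - created).total_seconds()
print(delay)
```

That is roughly 100 seconds before the query plan even appears, consistent with the ~90s submission delay described below.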


On Tue, Jun 4, 2019 at 2:00 PM Jun Zhu <jun....@vungle.com> wrote:

> Hi,
> I'm running the Thrift Server on YARN.
> Queries from the beeline client reach the Thrift Server quickly, but it
> takes a while (about 90s) for the job to be submitted to the YARN cluster.
> From the Thrift Server log:
>
>> *19/06/04 05:48:27* DEBUG SparkSQLOperationManager: Created Operation
>> for explain select count(*) from perf_as_reportads with
>> session=org.apache.hive.service.cli.session.HiveSessionImpl@1f30fc84,
>> runInBackground=true
>> 19/06/04 05:48:27 INFO SparkExecuteStatementOperation: Running query 
>> '*explain
>> select count(*) from perf_as_reportads*' with
>> c63b765f-050a-412d-a817-45d5a990b59d
>> 19/06/04 05:49:06 INFO ThriftCLIService: Client protocol version:
>> HIVE_CLI_SERVICE_PROTOCOL_V8
>> 19/06/04 05:49:06 INFO SessionState: Created local directory:
>> /tmp/384b2bd2-53fd-4300-a0dd-3be6590a6029_resources
>> 19/06/04 05:49:06 INFO SessionState: Created HDFS directory:
>> /tmp/spark/spark/384b2bd2-53fd-4300-a0dd-3be6590a6029
>> 19/06/04 05:49:06 INFO SessionState: Created local directory:
>> /tmp/spark/384b2bd2-53fd-4300-a0dd-3be6590a6029
>> 19/06/04 05:49:06 INFO SessionState: Created HDFS directory:
>> /tmp/spark/spark/384b2bd2-53fd-4300-a0dd-3be6590a6029/_tmp_space.db
>> 19/06/04 05:49:06 INFO HiveSessionImpl: Operation log session directory
>> is created: /tmp/spark/operation_logs/384b2bd2-53fd-4300-a0dd-3be6590a6029
>> 19/06/04 05:50:06 INFO ThriftCLIService: Client protocol version:
>> HIVE_CLI_SERVICE_PROTOCOL_V8
>> 19/06/04 05:50:06 INFO SessionState: Created local directory:
>> /tmp/714c6377-5151-4574-969b-2c1cb2ed0d02_resources
>> 19/06/04 05:50:06 INFO SessionState: Created HDFS directory:
>> /tmp/spark/spark/714c6377-5151-4574-969b-2c1cb2ed0d02
>> 19/06/04 05:50:06 INFO SessionState: Created local directory:
>> /tmp/spark/714c6377-5151-4574-969b-2c1cb2ed0d02
>> 19/06/04 05:50:06 INFO SessionState: Created HDFS directory:
>> /tmp/spark/spark/714c6377-5151-4574-969b-2c1cb2ed0d02/_tmp_space.db
>> 19/06/04 05:50:06 INFO HiveSessionImpl: Operation log session directory
>> is created: /tmp/spark/operation_logs/714c6377-5151-4574-969b-2c1cb2ed0d02
>> *19/06/04 05:50:15* DEBUG SparkExecuteStatementOperation: == Parsed
>> Logical Plan ==
>> ExplainCommand 'Project [unresolvedalias('count(1), None)], false, false,
>> false
>> == Analyzed Logical Plan ==
>> plan: string
>> ExplainCommand 'Project [unresolvedalias('count(1), None)], false, false,
>> false
>> == Optimized Logical Plan ==
>> ExplainCommand 'Project [unresolvedalias('count(1), None)], false, false,
>> false
>> == Physical Plan ==
>> Execute ExplainCommand
>>    +- ExplainCommand 'Project [unresolvedalias('count(1), None)], false,
>> false, false
>> *19/06/04 05:50:15* INFO SparkExecuteStatementOperation: Result Schema:
>> StructType(StructField(plan,StringType,true))
>
>
> I had set the Thrift Server's min resources (10 instances) and init
> resources (10) on YARN.
> Any thoughts? Could this be related to a config issue?
> --
> *Jun Zhu*
> Sr. Engineer I, Data
> +86 18565739171
> Units 3801, 3804, 38F, C Block, Beijing Yintai Center, Beijing, China
>
>
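For reference, a minimal sketch of how the resource settings mentioned above would typically be expressed when launching the Thrift Server on YARN. This assumes "miniresource"/"initresource" correspond to Spark's standard dynamic-allocation executor settings; the exact property names in use on this cluster are not shown, so the flags below are an assumption, not the poster's actual configuration:

```shell
# Hedged sketch: standard Spark dynamic-allocation settings assumed to
# match the "min resources (10)" / "init resources (10)" described above.
./sbin/start-thriftserver.sh \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=10 \
  --conf spark.dynamicAllocation.initialExecutors=10
```

If dynamic allocation is enabled, a long gap before YARN submission can also come from waiting on executor allocation, which is worth ruling out alongside the metastore connections visible in the log.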
