[ https://issues.apache.org/jira/browse/HIVE-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034578#comment-15034578 ]

Jimmy Xiang commented on HIVE-12538:
------------------------------------

bq. Not quite following. Is there anything special in the operation conf for 
SparkSession? And when to set "isSparkConfigUpdated = false"?
We can set it to false for the session-level conf only, so this flag in the 
operation-level conf is totally ignored, all the time.
Things are actually a little tricky. If we use the session-level conf, we could 
miss some non-spark-related settings in the operation-level conf.
If we use the operation-level conf, we could miss some spark-related settings 
in the session-level conf.
Instead of just maintaining an isSparkConfigUpdated flag, we should probably 
have a separate map to store such changed spark-related settings temporarily.
This map can be reset when SparkUtilities#getSparkSession() is invoked.
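A minimal sketch of that idea: track only spark-related keys that changed since the last session was obtained, and drain the map at the point where SparkUtilities#getSparkSession() would be invoked. The class and method names here are illustrative, not Hive's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tracker replacing the single isSparkConfigUpdated flag:
// it remembers which spark-related settings changed, so the session can
// be reused until an actual spark config change is observed.
public class SparkConfTracker {
    private final Map<String, String> changedSparkSettings = new HashMap<>();

    // Record a config change; only spark-related keys are tracked.
    public void set(String key, String value) {
        if (key.startsWith("spark.")) {
            changedSparkSettings.put(key, value);
        }
    }

    // True if the existing SparkSession must be replaced.
    public boolean needsNewSession() {
        return !changedSparkSettings.isEmpty();
    }

    // Analogous to resetting the map when
    // SparkUtilities#getSparkSession() is invoked: return the pending
    // changes and clear the map so the new session is reused afterwards.
    public Map<String, String> drainChanges() {
        Map<String, String> changes = new HashMap<>(changedSparkSettings);
        changedSparkSettings.clear();
        return changes;
    }

    public static void main(String[] args) {
        SparkConfTracker tracker = new SparkConfTracker();
        tracker.set("spark.yarn.queue", "QueueA");
        tracker.set("hive.exec.parallel", "true"); // ignored: not spark-related
        System.out.println(tracker.needsNewSession()); // spark conf changed
        tracker.drainChanges();                        // session re-created here
        System.out.println(tracker.needsNewSession()); // session now reusable
    }
}
```

With this scheme, a non-spark setting such as hive.exec.parallel never forces a new yarn application, while spark.yarn.queue does, exactly once.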

> After set spark related config, SparkSession never get reused
> -------------------------------------------------------------
>
>                 Key: HIVE-12538
>                 URL: https://issues.apache.org/jira/browse/HIVE-12538
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.3.0
>            Reporter: Nemon Lou
>            Assignee: Nemon Lou
>         Attachments: HIVE-12538.1.patch, HIVE-12538.patch
>
>
> Hive on Spark, yarn-cluster mode.
> After setting "set spark.yarn.queue=QueueA;",
> run the query "select count(*) from test" 3 times and you will find 3
> different yarn applications.
> Two of the yarn applications are in FINISHED & SUCCEEDED state, and one is in
> RUNNING & UNDEFINED state waiting for the next job.
> And if you submit one more "select count(*) from test", the third one will be
> in FINISHED & SUCCEEDED state and a new yarn application will start up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
