> On Dec. 17, 2014, 7:06 p.m., Marcelo Vanzin wrote:
> > +1 to Xuefu's comments. The config name also looks very generic, since it's 
> > only applied to a couple of jobs submitted to the client. But I don't have 
> > a good suggestion here.

In getExecutorCount/getJobInfo/getStageInfo we call JobHandle.get() to wait 
for the result, so I named the setting SPARK_CLIENT_FUTURE_TIMEOUT: Hive uses 
it as the timeout value when calling JobHandle.get(). That seems more 
descriptive than the previous name.
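
To make the pattern concrete, here is a minimal sketch (illustrative only, not
the actual patch; the class and method names are made up, and JobHandle is
assumed to behave like a java.util.concurrent.Future, which it extends in the
spark-client module):

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    class SparkClientFutureTimeoutSketch {
        // Block on a client-side future for at most timeoutSeconds instead of
        // waiting indefinitely; the configured SPARK_CLIENT_FUTURE_TIMEOUT
        // value would be passed in here.
        static <T> T getWithTimeout(Future<T> handle, long timeoutSeconds)
                throws Exception {
            try {
                return handle.get(timeoutSeconds, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                throw new RuntimeException(
                    "Spark client future timed out after " + timeoutSeconds + "s", e);
            }
        }
    }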


- chengxiang


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29145/#review65348
-----------------------------------------------------------


On Dec. 17, 2014, 6:28 a.m., chengxiang li wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/29145/
> -----------------------------------------------------------
> 
> (Updated Dec. 17, 2014, 6:28 a.m.)
> 
> 
> Review request for hive and Xuefu Zhang.
> 
> 
> Bugs: HIVE-9094
>     https://issues.apache.org/jira/browse/HIVE-9094
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> RemoteHiveSparkClient::getExecutorCount times out after 5s when the Spark 
> cluster has not launched yet. This patch:
> 1. Makes the timeout value configurable.
> 2. Sets the default timeout value to 60s.
> 3. Enables the timeout for getting Spark job info and Spark stage info (a 
> sketch follows the list).
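> 
> A minimal sketch of the configuration read, assuming the new HiveConf key is 
> named hive.spark.client.future.timeout (the diff below is authoritative; the 
> class and method names here are hypothetical):
> 
>     import org.apache.hadoop.conf.Configuration;
> 
>     class FutureTimeoutConfSketch {
>         // Hypothetical helper: read the timeout in seconds from the
>         // configuration, falling back to the new 60s default rather than
>         // the old hard-coded 5s.
>         static long futureTimeoutSeconds(Configuration conf) {
>             return conf.getLong("hive.spark.client.future.timeout", 60L);
>         }
>     }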
> 
> 
> Diffs
> -----
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 22f052a 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveSparkClientFactory.java 5d6a02c 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/RemoteHiveSparkClient.java e1946d5 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/impl/RemoteSparkJobStatus.java 6217de4 
> 
> Diff: https://reviews.apache.org/r/29145/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> chengxiang li
> 
>
