[ https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Rui Li updated HIVE-12650:
--------------------------
    Attachment: HIVE-12650.2.patch

Updated the patch to also improve the messages in yarn-cluster mode. Here's a summary of the behavior under the two modes:

|| ||Error users will see||Will spark-submit be killed after timeout||
|yarn-cluster|Failed to create spark client|Y|
|yarn-client|Job hasn't been submitted|N|

I think the bottom line here is that when the starving app finally gets served, the aborted query won't be executed, so resources won't be wasted.

> Spark-submit is killed when Hive times out. Killing spark-submit doesn't
> cancel the AM request. When the AM is finally launched, it tries to connect back to
> Hive and gets refused.
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-12650
>                 URL: https://issues.apache.org/jira/browse/HIVE-12650
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Rui Li
>         Attachments: HIVE-12650.1.patch, HIVE-12650.2.patch
>
> I think hive.spark.client.server.connect.timeout should be set greater than
> spark.yarn.am.waitTime. The default value for spark.yarn.am.waitTime is 100s,
> while the default for hive.spark.client.server.connect.timeout is only 90s,
> which is not good. We can increase it to a larger value such as 120s.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
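For reference, the workaround proposed in the description can be sketched as a configuration change. This is a minimal, hedged example of a hive-site.xml fragment raising hive.spark.client.server.connect.timeout above the 100s default of spark.yarn.am.waitTime; the millisecond time-unit suffix follows Hive's usual time-value convention, and the 120s value is the one suggested in the issue, not a tuned recommendation:

```xml
<!-- hive-site.xml (sketch): raise the Hive -> Spark remote client
     handshake timeout above spark.yarn.am.waitTime (default 100s),
     so a slowly scheduled AM can still connect back to Hive. -->
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <!-- default is 90000ms; 120000ms (120s) > the AM's 100s wait -->
  <value>120000ms</value>
</property>
```

The same value can likely be set per session with `set hive.spark.client.server.connect.timeout=120000ms;` in the Hive CLI, though depending on the Hive version it may only take effect when a new Spark session is started.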