[ https://issues.apache.org/jira/browse/HIVE-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200248#comment-14200248 ]
Xuefu Zhang edited comment on HIVE-8548 at 11/6/14 2:56 PM:
------------------------------------------------------------

Hi [~chengxiang li], I think nobody is going to deploy HS2 in production with local mode, and HS2 embedded mode (embedded in Beeline) should behave like Hive CLI. Thus, I think it might be better to keep them consistent. Based on this, I think "local" should be the default whether it's Hive CLI or HS2, and they actually share the same code path (w.r.t. Spark integration). In addition, "local" should refer to a local Spark context in both cases. As to the concurrency problem, we just need some proper documentation. A remote Spark context should be used when {{spark.master != local}}. I think this approach makes the implementation simpler with seemingly better usability. We can revisit this at a later phase.

> Integrate with remote Spark context after HIVE-8528 [Spark Branch]
> ------------------------------------------------------------------
>
> Key: HIVE-8548
> URL: https://issues.apache.org/jira/browse/HIVE-8548
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Xuefu Zhang
> Assignee: Chengxiang Li
>
> With HIVE-8528, HiveServer2 should use the remote Spark context to submit jobs,
> monitor progress, etc.
> This is necessary if Hive runs on a standalone cluster,
> YARN, or Mesos. If Hive runs with spark.master=local, we should continue
> using SparkContext in the current way.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
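The dispatch rule discussed in the thread (use a local SparkContext when {{spark.master}} is "local", a remote Spark context otherwise) could be sketched roughly as below. This is only an illustration of the decision logic, not Hive's actual API; the class and method names ({{SparkClientFactory}}, {{isLocalMaster}}, {{chooseClient}}) are hypothetical.

```java
// Illustrative sketch only: chooses between a local and a remote Spark
// client based on the spark.master setting, mirroring the rule
// "remote Spark context should be used when spark.master != local".
// All names here are hypothetical, not Hive's real classes.
public class SparkClientFactory {

    /** Returns true when the master setting requests local execution. */
    static boolean isLocalMaster(String master) {
        // Covers "local", "local[4]", "local[*]" style master strings.
        return master != null && master.startsWith("local");
    }

    /** Names which kind of client the given master would select. */
    static String chooseClient(String master) {
        return isLocalMaster(master)
                ? "local-spark-context"
                : "remote-spark-context";
    }
}
```

Under this sketch, both Hive CLI and HS2 would take the same code path, and a master such as {{yarn-cluster}} or a standalone/Mesos URL would select the remote client.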