[ https://issues.apache.org/jira/browse/HIVE-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317318#comment-16317318 ]

Marcelo Vanzin commented on HIVE-16484:
---------------------------------------

Just amending my previous response: if the concern with {{SparkLauncher}} is 
the number of file descriptors because of the extra one used by the launcher 
server connection, using {{InProcessLauncher}} will probably end up decreasing 
the number of fds being used. Instead of potentially 3 fds for a child process 
(pipes for stdin / stdout / stderr), you have one for the socket connection. 
For the normal child-process case, then yes, you just get one extra file 
descriptor. (Or maybe you break even, because I think {{SparkLauncher}} will 
merge stdout and stderr in that case.)

As for why this is better, I think the main advantage will come from 
eventually using {{InProcessLauncher}}, since Hive wouldn't need a separate 
Spark installation to be able to launch Spark apps. It could ship with 
everything needed to run HoS out of the box.

Security can probably become simpler; instead of having to run kinit before 
starting a Spark child process, HS2 could potentially just instantiate 
{{InProcessLauncher}} inside a {{proxyUser.doAs}} call. I haven't actually 
tried that, but that's the general idea of how to use it in a secure env.
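A minimal, untested sketch of that idea (the app resource, main class, and 
master below are placeholders, not the actual HoS wiring):

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.spark.launcher.InProcessLauncher;
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class SecureInProcessLaunchSketch {

  public static SparkAppHandle launchAs(String user) throws Exception {
    // HS2 is already logged in via its own keytab; build a proxy UGI for the
    // end user instead of running kinit before spawning a child process.
    UserGroupInformation proxy = UserGroupInformation.createProxyUser(
        user, UserGroupInformation.getLoginUser());

    // Instantiate and start the launcher inside doAs so the in-process driver
    // runs with the proxy user's credentials.
    return proxy.doAs((PrivilegedExceptionAction<SparkAppHandle>) () ->
        new InProcessLauncher()
            .setAppResource(SparkLauncher.NO_RESOURCE)
            .setMainClass("org.apache.hive.spark.client.RemoteDriver") // placeholder
            .setMaster("yarn")                                         // placeholder
            .startApplication());
  }
}
{code}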


> Investigate SparkLauncher for HoS as alternative to bin/spark-submit
> --------------------------------------------------------------------
>
>                 Key: HIVE-16484
>                 URL: https://issues.apache.org/jira/browse/HIVE-16484
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>         Attachments: HIVE-16484.1.patch, HIVE-16484.10.patch, 
> HIVE-16484.2.patch, HIVE-16484.3.patch, HIVE-16484.4.patch, 
> HIVE-16484.5.patch, HIVE-16484.6.patch, HIVE-16484.7.patch, 
> HIVE-16484.8.patch, HIVE-16484.9.patch
>
>
> {{SparkClientImpl#startDriver}} currently looks for the {{SPARK_HOME}} 
> directory and invokes the {{bin/spark-submit}} script, which spawns a 
> separate process to run the Spark application.
> {{SparkLauncher}} was added in SPARK-4924 and is a programmatic way to launch 
> Spark applications.
> I see a few advantages:
> * No need to spawn a separate process to launch HoS --> lower startup time
> * Simplifies the code in {{SparkClientImpl}} --> easier to debug
> * {{SparkLauncher#startApplication}} returns a {{SparkAppHandle}} which 
> provides useful methods for querying the state of the Spark job
> ** It also allows the launcher to specify a list of job listeners (see the 
> sketch below)
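For reference, a rough sketch of what the {{startApplication}} / 
{{SparkAppHandle}} listener API looks like (the paths, main class, and master 
are placeholders, not the actual HoS configuration):

{code:java}
import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class SparkLauncherSketch {

  public static void main(String[] args) throws Exception {
    SparkAppHandle handle = new SparkLauncher()
        .setSparkHome("/path/to/spark")                             // placeholder
        .setAppResource("/path/to/hive-exec.jar")                   // placeholder
        .setMainClass("org.apache.hive.spark.client.RemoteDriver")  // placeholder
        .setMaster("yarn")                                          // placeholder
        .startApplication(new SparkAppHandle.Listener() {
          @Override
          public void stateChanged(SparkAppHandle h) {
            // Invoked on state transitions (CONNECTED, RUNNING, FINISHED, ...).
            System.out.println("state: " + h.getState());
          }

          @Override
          public void infoChanged(SparkAppHandle h) {
            // Invoked when other info changes, e.g. the app id becomes known.
            System.out.println("app id: " + h.getAppId());
          }
        });

    // The handle can later be used to query state or stop/kill the application.
    System.out.println("launched, current state = " + handle.getState());
  }
}
{code}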


