[ https://issues.apache.org/jira/browse/BEAM-313?focusedWorklogId=138312&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-138312 ]

ASF GitHub Bot logged work on BEAM-313:
---------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Aug/18 08:02
            Start Date: 27/Aug/18 08:02
    Worklog Time Spent: 10m 
      Work Description: kohlerm commented on issue #401: [BEAM-313] Enable the 
use of an existing spark context with the SparkPipelineRunner
URL: https://github.com/apache/beam/pull/401#issuecomment-416147962
 
 
   Thanks for the quick replies!
   @iemejia  @jbonofre @amitsela  
   All I want to do is run Beam on top of Spark and use the Spark Job Server 
(SJS) to start it, because the Spark cluster I have access to only allows 
starting jobs through SJS.
   Is there a code snippet somewhere showing how to do this (if it's possible at all)?
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 138312)
    Time Spent: 40m  (was: 30m)

> Enable the use of an existing spark context with the SparkPipelineRunner
> ------------------------------------------------------------------------
>
>                 Key: BEAM-313
>                 URL: https://issues.apache.org/jira/browse/BEAM-313
>             Project: Beam
>          Issue Type: New Feature
>          Components: runner-spark
>            Reporter: Abbass Marouni
>            Assignee: Jean-Baptiste Onofré
>            Priority: Major
>             Fix For: 0.3.0-incubating
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> The general use case is that the SparkPipelineRunner creates its own Spark 
> context and uses it for the pipeline execution.
> An alternative is to provide the SparkPipelineRunner with an existing 
> Spark context. This is useful in many cases where the Spark 
> context is managed outside of Beam (context reuse, advanced context 
> management, Spark Job Server, ...).
> Code sample: 
> https://github.com/amarouni/incubator-beam/commit/fe0bb517bf0ccde07ef5a61f3e44df695b75f076
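For context, a minimal sketch of what wiring in a provided context looks like,
assuming the options that later Beam releases expose on the Spark runner
(SparkContextOptions with useProvidedSparkContext / providedSparkContext); the
exact class and method names in the incubating releases this issue targets may
differ, so treat this as an illustration rather than the definitive API:

    import org.apache.beam.runners.spark.SparkContextOptions;
    import org.apache.beam.runners.spark.SparkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ProvidedSparkContextExample {
      public static void main(String[] args) {
        // A context created and managed outside of Beam, e.g. by the
        // Spark Job Server ("local[2]" here is only for illustration).
        JavaSparkContext jsc =
            new JavaSparkContext("local[2]", "beam-on-provided-context");

        // Tell the Spark runner to reuse that context instead of
        // creating its own.
        SparkContextOptions options =
            PipelineOptionsFactory.as(SparkContextOptions.class);
        options.setRunner(SparkRunner.class);
        options.setUseProvidedSparkContext(true);
        options.setProvidedSparkContext(jsc);

        Pipeline p = Pipeline.create(options);
        // ... apply transforms here ...
        p.run().waitUntilFinish();

        // The external owner, not Beam, decides when to stop the context.
        jsc.stop();
      }
    }

Because the context is provided, the runner does not tear it down; its owner
(e.g. the Spark Job Server) keeps control of the lifecycle, which is exactly
the reuse scenario described above.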



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
