Github user srowen commented on the pull request:

    https://github.com/apache/spark/pull/3130#issuecomment-65878693
  
    I am also in the camp that wants to use Spark without `spark-submit`. My 
broad experience is that it's quite possible to configure `SparkContext` 
programmatically yourself. The scripts handle a lot of environment setup and 
wrap some simplifying flags and args around things. If you only have one or 
two specific deployment environments to support, and don't need those 
simplifications, you can configure the context manually and be on your way. 
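    
    For illustration, a minimal sketch of what that can look like (the master 
URL, app name, and jar path below are placeholders, not anything from this PR):
    
        import org.apache.spark.{SparkConf, SparkContext}
        
        // Assuming a standalone master and an already-built application jar
        val conf = new SparkConf()
          .setMaster("spark://master-host:7077")
          .setAppName("MyApp")
          .setJars(Seq("/path/to/my-app.jar"))
          .set("spark.executor.memory", "2g")
        
        val sc = new SparkContext(conf)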
    
    The "why" is that the app I am interested in does not have Spark as the 
controlling element. Running it via the Spark script is probably possible but 
just harder, as it gets in the way a bit.
    
    That said, I definitely do not package Spark/Hadoop classes; I put the 
cluster's copy on the classpath at runtime with my own light script. I suppose 
you could argue that's just recreating some of `spark-submit`, but it is an 
alternative and it works fine.
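    
    In an sbt build, for example, the usual way to keep Spark/Hadoop out of the 
packaged jar is to mark them "provided" (the versions here are only an example):
    
        // build.sbt: compile against Spark/Hadoop, but do not bundle them;
        // the cluster's copies go on the classpath at runtime instead
        libraryDependencies ++= Seq(
          "org.apache.spark"  %% "spark-core"    % "1.1.1" % "provided",
          "org.apache.hadoop" %  "hadoop-client" % "2.4.0" % "provided"
        )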
    
    The issue here, however, seems to be classpath and version conflicts, which 
is a different, orthogonal problem.

