The Spark documentation and code assume you launch your application from the
shell. In my company that's not convenient - we submit cluster jobs from code
in our web service. It took me a lot of time to move as much configuration
into code as I could, because setting configuration at process start is quite
hard in our environment. I'd like to contribute some patches and write some
documentation that bring our practices to Spark. Please help me do it by
reviewing this first patch: https://github.com/apache/spark/pull/82
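For context, a minimal sketch of what "running from code" means here: building the SparkConf and SparkContext programmatically instead of relying on spark-submit. The master URL, app name, and memory setting below are placeholders, not values from the patch.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object EmbeddedSparkJob {
  def main(args: Array[String]): Unit = {
    // All configuration is set in code rather than via spark-submit flags
    // (the master URL and memory value are illustrative placeholders).
    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")
      .setAppName("job-from-web-service")
      .set("spark.executor.memory", "2g")

    val sc = new SparkContext(conf)
    try {
      // A trivial job, just to show the context is usable from here.
      val total = sc.parallelize(1 to 100).map(_ * 2).sum()
      println(s"sum = $total")
    } finally {
      sc.stop()
    }
  }
}
```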

-- 

Sincerely yours,
Egor Pakhomov
Scala Developer, Yandex
