Have you tried setting PYTHONPATH?
$ export PYTHONPATH="/path/to/project"
$ spark-submit --master yarn-client /path/to/project/main_script.py
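If it is unclear whether the export took effect, one quick sanity check (a sketch; /path/to/project is just the placeholder path from the commands above) is to confirm that PYTHONPATH entries show up on sys.path of a child interpreter:

```python
# Entries in PYTHONPATH are prepended to sys.path when the interpreter
# starts, which is what lets the driver import modules from /path/to/project.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONPATH="/path/to/project")
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print('/path/to/project' in sys.path)"],
    env=env,
)
print(out.decode().strip())  # True
```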
Regards,
Ram
On 16 February 2016 at 15:33, Mohannad Ali wrote:
> Hello Everyone,
>
> I have code inside my project organized in
Hi Gourav,
If your question is how to distribute Python package dependencies across
the Spark cluster programmatically, here is an example:
$ export PYTHONPATH='path/to/thrift.zip:path/to/happybase.zip:path/to/your/py/application'
And in code:
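The message appears to be cut off here. As a sketch of the mechanism those zip entries rely on: Python's zipimport can load modules directly from a .zip archive on sys.path, which is why archives like thrift.zip and happybase.zip can be listed in PYTHONPATH as-is. The archive and module names below are made up for illustration; on a cluster, SparkContext.addPyFile achieves the same distribution at runtime.

```python
# Build a tiny zip containing a module, put it on sys.path, and import it --
# the same zipimport mechanism that makes zip entries in PYTHONPATH work.
import os
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "mylib.zip")  # hypothetical archive name
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("mylib.py", "def greet():\n    return 'hello'\n")

sys.path.insert(0, zip_path)  # equivalent to listing the zip in PYTHONPATH
import mylib

print(mylib.greet())  # hello
```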
Hi All,
Spark 1.5.2 does not appear to be backward compatible with functionality that
was available in earlier versions, at least in 1.3.1 and 1.4.1: it is no
longer possible to insert overwrite into an existing table that was initially
read as a DataFrame.
Our existing code base has a few internal Hive