Hi guys!
I solved my issue; it was indeed a permission problem.
Thanks for your help.

2016-06-22 9:27 GMT+02:00 Spico Florin <spicoflo...@gmail.com>:

> Hi!
>  I had a similar issue when the user that submitted the job to the Spark
> cluster didn't have permission to write to HDFS. If you have the HDFS GUI,
> you can check which users exist and what permissions they have. You can
> also check in the HDFS browser (
> http://stackoverflow.com/questions/27996034/opening-a-hdfs-file-in-browser
> ) whether your folder structure was created.
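>
> For example, you could check from the command line (a rough sketch; the
> /user/<your_user> path is just an illustration, adjust it to your layout):
>
>   # check the owner and permissions of the target parent directory
>   hdfs dfs -ls /user/<your_user>
>   # if needed, create the parent and hand it to the submitting user
>   # (chown typically has to run as the HDFS superuser)
>   hdfs dfs -mkdir -p /user/<your_user>
>   hdfs dfs -chown <your_user>:<your_group> /user/<your_user>
>
> Note that in yarn-cluster mode the job may run as a different user than
> the one on your workstation, so it is worth checking which user the write
> actually happens as.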
>
> If you are on YARN, an example could be:
>
>   spark-submit --class "your.class" --master yarn-client \
>     --num-executors 12 --executor-memory 16g --driver-memory 8g \
>     --executor-cores 8 <your.jar>.jar hdfs://<namenode>/<your_user>/output
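>
> Once the job finishes, you can verify that the output landed where you
> expect (same example path as above):
>
>   hdfs dfs -ls hdfs://<namenode>/<your_user>/output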
>
> I hope it helps.
>  Regards,
>  Florin
>
> On Wed, Jun 22, 2016 at 4:57 AM, Jeff Zhang <zjf...@gmail.com> wrote:
>
>> Please check the driver and executor logs; there should be log entries
>> showing where the data is written.
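>>
>> In yarn-cluster mode the driver runs inside the cluster, so one way to
>> pull its logs (assuming log aggregation is enabled; the application id
>> comes from the YARN UI or the spark-submit output) is:
>>
>>   yarn logs -applicationId <application_id>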
>>
>> On Wed, Jun 22, 2016 at 2:03 AM, Pierre Villard <
>> pierre.villard...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have a Spark job writing files to HDFS using the .saveAsHadoopFile
>>> method.
>>>
>>> If I run my job in local/client mode, it works as expected and all my
>>> files are written to HDFS. However, if I change to yarn/cluster mode, I
>>> don't see any error logs (the job is successful) and no files are written
>>> to HDFS.
>>>
>>> Is there any reason for this behavior? Any thoughts on how to track down
>>> what is happening here?
>>>
>>> Thanks!
>>>
>>> Pierre.
>>>
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>
