Hi, I have a Spark job that writes files to HDFS using the .saveAsHadoopFile method.
When I run the job in local or yarn-client mode, it works as expected and all my files are written to HDFS. However, when I switch to yarn-cluster mode, the job completes successfully with no error logs, but no files are written to HDFS. Is there a known reason for this behavior? Any thoughts on how to track down what is happening here? Thanks! Pierre.
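For reference, the write looks roughly like this; the path, key/value types, and output format below are illustrative placeholders, not my actual job:

```scala
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapred.TextOutputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the write path (names and types are made up for this example).
val sc = new SparkContext(new SparkConf().setAppName("hdfs-write-example"))

val data = sc
  .parallelize(Seq(("a", "1"), ("b", "2")))
  .map { case (k, v) => (new Text(k), new Text(v)) }

// saveAsHadoopFile uses the old mapred OutputFormat API; the output path
// here is a fully qualified HDFS URI rather than a relative path.
data.saveAsHadoopFile(
  "hdfs://namenode:8020/user/pierre/output",
  classOf[Text],
  classOf[Text],
  classOf[TextOutputFormat[Text, Text]]
)
```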