Those files are created by the Hadoop API that Spark leverages. Spark does
not directly control that.

You may want to check with the Hadoop project to see whether they are
considering changing this behavior. I believe it was introduced because S3
at one point required it, though it no longer does.
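In the meantime, cleaning them up yourself is straightforward. Here is a
minimal sketch in Python that picks the marker objects out of a bucket
listing; it assumes the markers use the "_$folder$" suffix that Hadoop's
s3n filesystem writes (check against your own bucket listing), and the
example key names are made up:

```python
# Sketch: identify (and optionally delete) Hadoop's zero-byte folder-marker
# objects in an S3 bucket listing. Assumes the "_$folder$" suffix used by
# the s3n filesystem; verify the exact suffix against your own bucket.

MARKER_SUFFIX = "_$folder$"

def folder_marker_keys(keys):
    """Return the subset of S3 keys that look like Hadoop folder markers."""
    return [k for k in keys if k.endswith(MARKER_SUFFIX)]

if __name__ == "__main__":
    # Hypothetical listing with marker objects mixed in with real data files.
    listing = [
        "output/part-00000",
        "output_$folder$",
        "output/logs_$folder$",
        "output/part-00001",
    ]
    for key in folder_marker_keys(listing):
        print("would delete:", key)
        # With boto3, the actual delete would be something like:
        #   s3.delete_object(Bucket=bucket, Key=key)
```

You could wire this into a post-job cleanup step that lists the output
prefix and deletes whatever matches.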

On Tue, Sep 30, 2014 at 10:43 AM, pouryas <pou...@adbrain.com> wrote:

> I would like to know a way for not adding those $_folder$ files to S3 as
> well. I can go ahead and delete them but it would be nice if Spark handles
> this for you.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/S3-Extra-folder-files-for-every-directory-node-tp15078p15402.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>