[ https://issues.apache.org/jira/browse/SPARK-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15312506#comment-15312506 ]

Marco Capuccini commented on SPARK-15729:
-----------------------------------------

[~srowen] I am sorry for reopening the issue. So saving to the regular file 
system is allowed only when using the "local" master? I don't understand. The 
documentation clearly states that saveAsTextFile will "Write the elements of 
the dataset as a text file (or set of text files) in a given directory in the 
local filesystem ...". It does not mention that the regular file system is not 
allowed when the job runs on a cluster. Also, why was this same code working 
in 1.4.0?
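
For what it's worth, a minimal workaround sketch for the cluster case, 
assuming the dataset fits in driver memory (the output path below is 
hypothetical): collect the partitions back to the driver and write a single 
file there, so the result lands on one machine's local filesystem.

    import java.nio.file.{Files, Paths}
    import java.nio.charset.StandardCharsets

    val data = sc.parallelize(1 to 100)
    // collect() pulls every partition back to the driver JVM, so the write
    // below happens on a single machine rather than on each executor.
    val lines = data.collect().mkString("\n")
    Files.write(Paths.get("/mnt/volume/test-collected.txt"),
      lines.getBytes(StandardCharsets.UTF_8))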

> saveAsTextFile not working on regular filesystem
> ------------------------------------------------
>
>                 Key: SPARK-15729
>                 URL: https://issues.apache.org/jira/browse/SPARK-15729
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.1
>            Reporter: Marco Capuccini
>            Priority: Blocker
>
> I set up a standalone Spark cluster. I don't need HDFS, so I just want to 
> save the files on the regular file system in a distributed manner. For 
> testing purposes, I opened a Spark shell and ran the following code.
> sc.parallelize(1 to 100).saveAsTextFile("file:///mnt/volume/test.txt")
> I got no error from this, but if I inspect the /mnt/volume/test.txt folder 
> on each node, this is what I see:
> On the master (where I launched the Spark shell):
> /mnt/volume/test.txt/_SUCCESS
> On the workers:
> /mnt/volume/test.txt/_temporary
> It seems like some failure occurred, but I didn't get any error. Is this a 
> bug, or am I missing something?
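
As a hedged diagnostic sketch (hypothetical, but runnable in the same Spark 
shell), the snippet below prints which host processes each partition. On a 
standalone cluster it shows the partitions being handled on the workers' 
local disks, which is consistent with _temporary remaining on the workers 
while only _SUCCESS appears on the driver's machine.

    import java.net.InetAddress

    sc.parallelize(1 to 100, 4)
      .mapPartitionsWithIndex { (i, it) =>
        // Report the hostname of the executor that owns this partition.
        Iterator(s"partition $i on host ${InetAddress.getLocalHost.getHostName}: ${it.size} records")
      }
      .collect()
      .foreach(println)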


