The file gets created on the fly, so I don't know how to make sure that it's
accessible to all nodes.
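For what it's worth, the empty-file symptom described below the thread is often just an unclosed `PrintWriter`: output is buffered and never reaches disk unless the writer is flushed or closed. A minimal local sketch (the temp-file name is hypothetical):

```scala
import java.io.{File, PrintWriter}
import scala.io.Source

// Hypothetical throwaway file; PrintWriter buffers output, so nothing
// reaches disk until the writer is flushed or closed.
val file = File.createTempFile("test", ".txt")
val writer = new PrintWriter(file)
try {
  writer.write("Hello Scala")
} finally {
  writer.close() // without this, the file stays empty
}

val contents = Source.fromFile(file).mkString
println(contents) // prints "Hello Scala"
```

Note that on a cluster this still writes only to the local disk of whichever node runs the code, which is the separate problem discussed in this thread.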

On Mon, Sep 15, 2014 at 10:10 PM, rapelly kartheek <kartheek.m...@gmail.com>
wrote:

> Yes. I have HDFS. My cluster has 5 nodes. When I run the above commands, I
> see that the file gets created on the master node, but no data is ever
> written to it.
>
>
> On Mon, Sep 15, 2014 at 10:06 PM, Mohit Jaggi <mohitja...@gmail.com>
> wrote:
>
>> Is this code running in an executor? You need to make sure the file is
>> accessible on ALL executors. One way to do that is to use a distributed
>> filesystem like HDFS or GlusterFS.
>>
>> On Mon, Sep 15, 2014 at 8:51 AM, rapelly kartheek <
>> kartheek.m...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> I am trying to perform some read/write file operations in Spark, but
>>> somehow I am able neither to write to a file nor to read from one.
>>>
>>> import java.io._
>>>
>>>       val writer = new PrintWriter(new File("test.txt" ))
>>>
>>>       writer.write("Hello Scala")
>>>
>>>
>>> Can someone please tell me how to perform file I/O in Spark?
>>>
>>>
>>
>

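As suggested above, the robust fix is to write through a filesystem that every executor can see, and the most idiomatic way in Spark is to let Spark do the writing. A minimal sketch using `saveAsTextFile`; it writes to a throwaway local directory so it runs standalone, but on a cluster you would pass a shared URI such as `hdfs://namenode:9000/user/you/output` (hypothetical address):

```scala
import java.nio.file.Files
import org.apache.spark.{SparkConf, SparkContext}

// Local temp directory so the sketch runs standalone; on a cluster,
// use an hdfs:// (or other shared-filesystem) URI instead.
val outputPath = Files.createTempDirectory("spark-out").toString + "/result"

val sc = new SparkContext(
  new SparkConf().setAppName("write-sketch").setMaster("local[*]"))

// saveAsTextFile writes one part-file per partition; on HDFS every
// executor writes into the same shared filesystem, so all nodes see it.
sc.parallelize(Seq("Hello Scala")).saveAsTextFile(outputPath)

// Reading back works from any node for the same reason.
val readBack = sc.textFile(outputPath).collect().mkString

sc.stop()
```

For a small side file that is generated on the fly, `SparkContext.addFile` plus `SparkFiles.get` is another option: Spark ships the file to every executor for you.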