Andy:
Can you show the complete stack trace?

Have you checked that there are enough free inodes on the .129 machine? A filesystem can report "No space left on device" even when free blocks remain, if it has run out of inodes.
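
For reference, a quick way to check both on 192.168.70.129 (standard coreutils; which mount to look at depends on where Spark keeps its scratch data):

    df -h    # free blocks per filesystem
    df -i    # free inodes per filesystem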

Cheers

> On Sep 23, 2015, at 11:43 PM, Andy Huang <andy.hu...@servian.com.au> wrote:
> 
> Hi Jack,
> 
> Are you writing out to disk? If not, it sounds like Spark is spilling to disk
> (RAM filled up) and the spill location is running out of space.
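> 
> By default the spill goes to spark.local.dir, which falls back to /tmp, so a
> small root partition fills up quickly. A rough way to watch it while the job
> runs, and to redirect it if needed (the paths below are placeholders):
> 
>     df -h /tmp             # free space on the partition backing the spill dir
>     du -sh /tmp/spark-*    # size of Spark's scratch directories
> 
> and at submit time, something like:
> 
>     spark-submit --conf "spark.local.dir=/data/spark-tmp" ...
> 
> (Note that SPARK_LOCAL_DIRS set on the workers takes precedence over that conf.)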
> 
> Cheers
> Andy
> 
>> On Thu, Sep 24, 2015 at 4:29 PM, Jack Yang <j...@uow.edu.au> wrote:
>> Hi folks,
>> 
>>  
>> 
>> I have an issue with GraphX (Spark 1.4.0; 4 machines, 4 GB memory, 4 CPU
>> cores).
>> 
>> Basically, I load data using the GraphLoader.edgeListFile method and then
>> count the number of vertices with graph.vertices.count().
>> 
>> The problem is:
>> 
>>  
>> 
>> Lost task 11972.0 in stage 6.0 (TID 54585, 192.168.70.129): 
>> java.io.IOException: No space left on device
>> 
>>         at java.io.FileOutputStream.writeBytes(Native Method)
>> 
>>         at java.io.FileOutputStream.write(FileOutputStream.java:345)
>> 
>>  
>> 
>> When I try a small amount of data, the code works, so I guess the error
>> comes from the amount of data.
>> 
>> This is how I submit the job:
>> 
>>  
>> 
>> spark-submit --class "myclass" \
>>   --master spark://hadoopmaster:7077 \
>>   --executor-memory 2048M \
>>   --driver-java-options "-XX:MaxPermSize=2G" \
>>   --total-executor-cores 4 \
>>   my.jar
>> 
>> (I am using the standalone cluster manager.)
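>> 
>> One caveat on those flags, worth double-checking: --driver-java-options only
>> affects the driver JVM. If the executors also need a larger PermGen, that is
>> set separately, e.g. (the 512m value is just an example):
>> 
>>     --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=512m"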
>> 
>> 
>> 
>> Any thoughts?
>> 
>> Best regards,
>> 
>> Jack
>> 
> 
> 
> 
> -- 
> Andy Huang | Managing Consultant | Servian Pty Ltd | t: 02 9376 0700 | f: 02 9376 0730 | m: 0433221979
