This is the output of df -h; as you can see, I'm only using one disk,
mounted on /

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda8       276G   34G  229G  13% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            7.8G  4.0K  7.8G   1% /dev
tmpfs           1.6G  1.4M  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            7.8G   37M  7.8G   1% /run/shm
none            100M   40K  100M   1% /run/user
/dev/sda1       496M   55M  442M  11% /boot/efi

Also, while running the program, I noticed that the Use% of the partition
mounted on "/" was growing very fast
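
In case it helps: Spark writes its shuffle spill files and on-disk persisted
blocks under spark.local.dir, which defaults to /tmp. The invocation below is
only a sketch of how that directory could be redirected; the path
/mnt/bigdisk/spark-tmp is a placeholder, not from my actual setup.

```shell
# Point Spark's scratch space at a directory on the large partition.
# spark.local.dir (default: /tmp) is where shuffle files and
# spilled/persisted blocks land, so it must have room to grow.
./spark-submit --class com.custom.sentimentAnalysis.MainPipeline \
  --master local[2] --driver-memory 5g \
  --conf spark.local.dir=/mnt/bigdisk/spark-tmp \
  ml_pipeline.jar labeledTrainData.tsv testData.tsv
```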

On Wed, Apr 29, 2015 at 12:19 PM Anshul Singhle <ans...@betaglide.com>
wrote:

> Do you have multiple disks? Maybe your work directory is not in the right
> disk?
>
> On Wed, Apr 29, 2015 at 4:43 PM, Selim Namsi <selim.na...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I'm using Spark (1.3.1) MLlib to run the random forest algorithm on
>> tf-idf output; the training data is a file of 156060 lines (size 8.1M).
>>
>> The problem is that when Spark tries to persist a partition in memory
>> and there is not enough memory, the partition is spilled to disk, and
>> despite having 229G of free disk space, I get "No space left on
>> device".
>>
>> This is how I'm running the program :
>>
>> ./spark-submit --class com.custom.sentimentAnalysis.MainPipeline --master
>> local[2] --driver-memory 5g ml_pipeline.jar labeledTrainData.tsv
>> testData.tsv
>>
>> And this is a part of the log:
>>
>>
>>
>> If you need more informations, please let me know.
>> Thanks
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/java-io-IOException-No-space-left-on-device-tp22702.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
