Or multiple volumes. The LOCAL_DIRS (YARN) and SPARK_LOCAL_DIRS (Mesos,
Standalone) environment variables and the spark.local.dir property control
where Spark writes temporary data (shuffle files and spilled partitions);
each accepts a comma-separated list of directories. The default is /tmp.
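
For example, to spread the scratch space across two volumes (the /data1
and /data2 paths below are just placeholders for whatever large disks you
have mounted):

  # conf/spark-defaults.conf
  spark.local.dir  /data1/spark-tmp,/data2/spark-tmp

  # or per job, on the spark-submit command line
  ./spark-submit --conf spark.local.dir=/data1/spark-tmp,/data2/spark-tmp ...

  # or, for Standalone/Mesos, in conf/spark-env.sh
  export SPARK_LOCAL_DIRS=/data1/spark-tmp,/data2/spark-tmp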

See
http://spark.apache.org/docs/latest/configuration.html#runtime-environment
for more details.
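
A quick way to see which filesystem is actually filling up (assuming the
default /tmp location; adjust the path if you have already changed
spark.local.dir):

  df -h /tmp            # free space on the filesystem backing /tmp
  du -sh /tmp/spark-*   # size of Spark's scratch directories, if any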

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Wed, Apr 29, 2015 at 6:19 AM, Anshul Singhle <ans...@betaglide.com>
wrote:

> Do you have multiple disks? Maybe your work directory is not on the
> right disk?
>
> On Wed, Apr 29, 2015 at 4:43 PM, Selim Namsi <selim.na...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I'm using Spark (1.3.1) MLlib to run the random forest algorithm on
>> tf-idf output; the training data is a file containing 156060 records
>> (8.1 MB).
>>
>> The problem is that when Spark tries to persist a partition in memory
>> and there is not enough memory, the partition is persisted to disk, and
>> despite having 229G of free disk space, I got "No space left on device".
>>
>> This is how I'm running the program:
>>
>> ./spark-submit --class com.custom.sentimentAnalysis.MainPipeline --master
>> local[2] --driver-memory 5g ml_pipeline.jar labeledTrainData.tsv
>> testData.tsv
>>
>> And this is part of the log:
>>
>>
>> If you need more information, please let me know.
>> Thanks
>>
>
