Re: Failed running Spark ALS

2014-09-19 Thread Nick Pentreath
Have you set spark.local.dir (I think this is the config setting)?

It needs to point to a volume with plenty of space.

By default, if I recall correctly, it points to /tmp.
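
If it is still on /tmp, one way to move it is via conf/spark-defaults.conf on each worker (a sketch; the path below is a placeholder, use any local volume with enough free space):

```properties
# conf/spark-defaults.conf on each worker
# /mnt/data/spark-tmp is a hypothetical path; a comma-separated
# list of directories spreads scratch I/O across multiple disks
spark.local.dir  /mnt/data/spark-tmp
```

Note that in standalone mode the SPARK_LOCAL_DIRS environment variable, if set, takes precedence over this property.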

Sent from my iPhone

> On 19 Sep 2014, at 23:35, "jw.cmu"  wrote:
> 
> I'm trying to run Spark ALS on the Netflix dataset, but it failed with a "No
> space left on device" exception. The exception seems to be thrown after the
> training phase. It's not clear to me what is being written or where the
> output directory is.
> 
> I was able to run the same code on the provided test.data dataset.
> 
> I'm new to Spark and I'd like to get some hints for resolving this problem.
> 
> The code I ran was taken from
> https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html (the
> Java version).
> 
> Relevant info:
> 
> Spark version: 1.0.2 (Standalone deployment)
> # slaves/workers/executors: 8
> Cores per worker: 64
> Memory per executor: 100g
> 
> Application parameters are left as default.
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Failed-running-Spark-ALS-tp14704.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 




Failed running Spark ALS

2014-09-19 Thread jw.cmu
I'm trying to run Spark ALS on the Netflix dataset, but it failed with a "No
space left on device" exception. The exception seems to be thrown after the
training phase. It's not clear to me what is being written or where the
output directory is.

I was able to run the same code on the provided test.data dataset.

I'm new to Spark and I'd like to get some hints for resolving this problem.

The code I ran was taken from
https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html (the
Java version).

Relevant info:

Spark version: 1.0.2 (Standalone deployment)
# slaves/workers/executors: 8
Cores per worker: 64
Memory per executor: 100g

Application parameters are left as default.
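
For context, the space here is most likely being consumed by shuffle files that ALS writes under spark.local.dir (default /tmp) on each worker during training, not by any job output. A quick sanity check on a worker (a sketch; /tmp is the default location and the spark-* directory name pattern is an assumption about the scratch-dir naming):

```shell
# Free space on the volume Spark uses for shuffle/spill data (default: /tmp)
df -h /tmp

# Rough size of Spark's scratch directories while a job is running;
# the glob may match nothing if no job is active, hence the || true
du -sh /tmp/spark-* 2>/dev/null || true
```

If /tmp sits on a small root partition, pointing spark.local.dir at a larger local volume on every worker is the usual fix.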

