Re: Low resource when upgrading from 1.1.0 to 1.3.0

2015-04-06 Thread Roy.Wang
I'm hitting the same problem. I deployed and ran Spark (version 1.3.0) in local
mode. When I run a simple app that counts the lines of a file, the console
prints "TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
resources".
I don't think my example app needs 512 MB of memory (I started the worker with 512 MB).
omidb, if you have solved this problem, please let me know.
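For anyone hitting this in standalone mode, the usual cause is that the application asks for more memory or cores than any registered worker can offer. A hedged sketch of submitting with an executor memory that fits inside a 512 MB worker (the master URL, class name, and paths are placeholders, not from this thread):

```shell
# Submit the line-count app asking for less memory than the worker offers.
# spark://master-host:7077, com.example.LineCount, and the jar/input paths
# are illustrative -- adjust them for your own cluster.
spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 256m \
  --total-executor-cores 1 \
  --class com.example.LineCount \
  /path/to/line-count.jar /path/to/input.txt
```

If the requested `--executor-memory` exceeds what any worker advertises, the scheduler keeps waiting and prints exactly the "Initial job has not accepted any resources" warning quoted above.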



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Low-resource-when-upgrading-from-1-1-0-to-1-3-0-tp22379p22387.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Low resource when upgrading from 1.1.0 to 1.3.0

2015-04-05 Thread nsalian
Could you check whether your workers are registered with the Master?
Also, look at the heap size of each worker.

For reference, could you paste the exact command you executed?
You mentioned that you changed the script; what was the change?
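If the workers do show up in the Master UI but advertise too little memory, the standalone worker size can be set in conf/spark-env.sh. A minimal sketch, assuming the default standalone layout (the values here are illustrative, not taken from this thread):

```shell
# conf/spark-env.sh -- standalone-mode worker sizing (illustrative values)
export SPARK_WORKER_MEMORY=1g   # total memory this worker can hand out to executors
export SPARK_WORKER_CORES=2     # total cores this worker can hand out
```

Restart the worker after editing this file; the Master UI should then show the updated memory and core counts for each registered worker.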



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Low-resource-when-upgrading-from-1-1-0-to-1-3-0-tp22379p22380.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
