Check the available resources you have (CPU cores and memory) on the master
web UI. The log you see means the job can't get any resources.
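
For example (a minimal sketch; the master URL and the sizes are
placeholders), cap the job so it fits inside what the master UI reports as
free cores and memory:

    spark-submit --master spark://<master>:7077 \
      --total-executor-cores 4 \
      --executor-memory 8g \
      your-app.jar

If --executor-memory is larger than the free memory on every worker, the
master can't place a single executor and the application just sits in the
WAITING state.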



On Wed, Jun 24, 2015 at 5:03 AM, Nizan Grauer <ni...@windward.eu> wrote:

> I have 30G per machine.
>
> This is the first (and only) job I'm trying to submit. So it's weird that
> for --total-executor-cores=20 it works, and for --total-executor-cores=4
> it doesn't
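>
> (For reference, --total-executor-cores corresponds to spark.cores.max, the
> cap on the total cores the app may take across the cluster; a sketch of the
> two submissions, assuming everything else on the command line, including
> the executor memory, stayed the same:)
>
>     spark-submit --total-executor-cores 20 --executor-memory 22g ...  # runs
>     spark-submit --total-executor-cores 4 --executor-memory 22g ...   # never gets resources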
>
> On Tue, Jun 23, 2015 at 10:46 PM, Igor Berman <igor.ber...@gmail.com>
> wrote:
>
>> Probably there are already jobs running there.
>> In addition, memory is also a resource: if one application is running and
>> has taken all your memory, and you then try to run another application
>> that asks for memory the cluster doesn't have, the second app won't run.
>>
>> So why are you specifying 22g as executor memory? How much memory do you
>> have on each machine?
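>>
>> For illustration (hypothetical numbers): on a 30g worker, a first app
>> started with 22g executors leaves only ~8g free, so a second app that
>> also asks for 22g executors can never be placed:
>>
>>     spark-submit --executor-memory 22g ...  # app 1: executor placed
>>     spark-submit --executor-memory 22g ...  # app 2: no 22g slot free, stays WAITING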
>>
>> On 23 June 2015 at 09:33, nizang <ni...@windward.eu> wrote:
>>
>>> To give a bit more data on what I'm trying to get -
>>>
>>> I have many tasks I want to run in parallel, so I want each task to take
>>> only a small part of the cluster (i.e. only a limited share of the 20
>>> cores in the cluster).
>>>
>>> I have important tasks that I want to get 10 cores, and small tasks that
>>> I want to run with only 1 or 2 cores.
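>>>
>>> For example (a sketch only; the flags besides cores are elided):
>>>
>>>     spark-submit --total-executor-cores 10 ... # an important task
>>>     spark-submit --total-executor-cores 2 ...  # a small task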
>>>
>>>
>>>
>>
>
