I don't think it is a bug; it may be something wrong with your Spark / YARN
configuration.

On Tue, Nov 24, 2015 at 12:13 PM, 谢廷稳 <xieting...@gmail.com> wrote:

> OK, the YARN cluster is used only by me. It has 6 nodes which can run over
> 100 executors, and the YARN RM logs showed that the Spark application did
> not request resources from it.
>
> Is this a bug? Should I create a JIRA for this problem?
>
> 2015-11-24 12:00 GMT+08:00 Saisai Shao <sai.sai.s...@gmail.com>:
>
>> OK, so it looks like your YARN cluster does not allocate the containers,
>> which you expect should be 50. Does the YARN cluster have enough resources
>> left after allocating the AM container? If not, that is the problem.
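>>
>> A quick way to check that (assuming a standard Hadoop CLI; exact flags
>> vary by version) is to look at per-node capacity on the RM:
>>
>>   yarn node -list    # then check used/available memory and vcores per node
>>
>> (Or open the ResourceManager web UI, by default on port 8088, and look at
>> the Nodes page.)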
>>
>> From my reading of your description, the problem does not lie in dynamic
>> allocation. As I said, I'm OK with setting the min and max executors to the
>> same number.
>>
>> On Tue, Nov 24, 2015 at 11:54 AM, 谢廷稳 <xieting...@gmail.com> wrote:
>>
>>> Hi Saisai,
>>> I'm sorry I did not describe it clearly. The YARN debug log said I have 50
>>> executors, but the ResourceManager showed that I only have 1 container,
>>> for the AppMaster.
>>>
>>> I have checked the YARN RM logs: after the AppMaster changed state from
>>> ACCEPTED to RUNNING, there were no more log entries about this job. So the
>>> problem is that I do not have any executors, but ExecutorAllocationManager
>>> thinks I do. Would you mind running a test in your cluster environment?
>>> Thanks,
>>> Weber
>>>
>>> 2015-11-24 11:00 GMT+08:00 Saisai Shao <sai.sai.s...@gmail.com>:
>>>
>>>> I think this behavior is expected: since you already have 50 executors
>>>> launched, there is no need to acquire additional executors. Your change is
>>>> not a solid fix; it just hides the log message.
>>>>
>>>> Again, I think you should check the YARN and Spark logs to see whether the
>>>> executors started correctly, and why resources are still not enough when
>>>> you already have 50 executors.
>>>>
>>>> On Tue, Nov 24, 2015 at 10:48 AM, 谢廷稳 <xieting...@gmail.com> wrote:
>>>>
>>>>> Hi SaiSai,
>>>>> I have changed "if (numExecutorsTarget >= maxNumExecutors)" to "if
>>>>> (numExecutorsTarget > maxNumExecutors)" in the first line of
>>>>> ExecutorAllocationManager#addExecutors() and it ran well.
>>>>> In my opinion, when I set minExecutors equal to maxExecutors, then the
>>>>> first time it tries to add executors, numExecutorsTarget already equals
>>>>> maxNumExecutors, so it repeatedly prints "DEBUG ExecutorAllocationManager:
>>>>> Not adding executors because our current target total is already 50
>>>>> (limit 50)". (A small sketch of that guard follows below.)
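>>>>>
>>>>> To make that loop concrete, here is a tiny standalone sketch of the guard
>>>>> (my own hypothetical reproduction, not Spark's actual class; the names
>>>>> mirror ExecutorAllocationManager):
>>>>>
>>>>> object AddExecutorsGuard {
>>>>>   val maxNumExecutors = 50
>>>>>   var numExecutorsTarget = 50 // min == max, so the target starts at the limit
>>>>>
>>>>>   def addExecutors(): Int = {
>>>>>     if (numExecutorsTarget >= maxNumExecutors) {
>>>>>       println(s"Not adding executors because our current target total " +
>>>>>         s"is already $numExecutorsTarget (limit $maxNumExecutors)")
>>>>>       return 0 // bails out before any containers are requested from YARN
>>>>>     }
>>>>>     val oldTarget = numExecutorsTarget
>>>>>     numExecutorsTarget = math.min(numExecutorsTarget + 1, maxNumExecutors)
>>>>>     numExecutorsTarget - oldTarget // number of executors actually added
>>>>>   }
>>>>>
>>>>>   def main(args: Array[String]): Unit = {
>>>>>     println(addExecutors()) // prints the debug line above, then 0
>>>>>   }
>>>>> }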
>>>>> Thanks
>>>>> Weber
>>>>>
>>>>> 2015-11-23 21:00 GMT+08:00 Saisai Shao <sai.sai.s...@gmail.com>:
>>>>>
>>>>>> Hi Tingwen,
>>>>>>
>>>>>> Would you mind sharing your changes to
>>>>>> ExecutorAllocationManager#addExecutors()?
>>>>>>
>>>>>> From my understanding and testing, dynamic allocation works when you set
>>>>>> the min and max number of executors to the same number.
>>>>>>
>>>>>> Please check your Spark and YARN logs to make sure the executors started
>>>>>> correctly; the warning log means there are currently not enough resources
>>>>>> to submit tasks.
>>>>>>
>>>>>> Thanks
>>>>>> Saisai
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 23, 2015 at 8:41 PM, 谢廷稳 <xieting...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>> I ran SparkPi on YARN with dynamic allocation enabled, set
>>>>>>> spark.dynamicAllocation.maxExecutors equal to
>>>>>>> spark.dynamicAllocation.minExecutors, and then submitted the application
>>>>>>> using:
>>>>>>> ./bin/spark-submit --class org.apache.spark.examples.SparkPi
>>>>>>> --master yarn-cluster --driver-memory 4g --executor-memory 8g
>>>>>>> lib/spark-examples*.jar 200
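>>>>>>>
>>>>>>> The dynamic allocation settings are not shown in the command above; if
>>>>>>> passed on the command line they would look roughly like this (values are
>>>>>>> my assumption, matching the "limit 50" in the log below):
>>>>>>>
>>>>>>> ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
>>>>>>>   --master yarn-cluster --driver-memory 4g --executor-memory 8g \
>>>>>>>   --conf spark.dynamicAllocation.enabled=true \
>>>>>>>   --conf spark.shuffle.service.enabled=true \
>>>>>>>   --conf spark.dynamicAllocation.minExecutors=50 \
>>>>>>>   --conf spark.dynamicAllocation.maxExecutors=50 \
>>>>>>>   lib/spark-examples*.jar 200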
>>>>>>>
>>>>>>> The application was submitted successfully, but the AppMaster kept
>>>>>>> saying "15/11/23 20:13:08 WARN cluster.YarnClusterScheduler: Initial job
>>>>>>> has not accepted any resources; check your cluster UI to ensure that
>>>>>>> workers are registered and have sufficient resources",
>>>>>>> and when I enabled DEBUG logging, I found "15/11/23 20:24:00 DEBUG
>>>>>>> ExecutorAllocationManager: Not adding executors because our current
>>>>>>> target total is already 50 (limit 50)" in the console.
>>>>>>>
>>>>>>> I have fixed it by modifying the code in
>>>>>>> ExecutorAllocationManager.addExecutors. Is this a bug, or is it by
>>>>>>> design that we can't set maxExecutors equal to minExecutors?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Weber
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
