What's the hardware configuration of the box you're running on, i.e. how
much memory does it have?
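On a Linux box you can check quickly with:

```shell
# Total/used/free memory on the node, in MB
free -m
# Or read it straight from the kernel
grep MemTotal /proc/meminfo
```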

Femi

On Wed, Mar 14, 2018 at 5:32 AM, kant kodali <kanth...@gmail.com> wrote:

> Tried this
>
>  ./spark-shell --master yarn --deploy-mode client --executor-memory 4g
>
>
> Same issue. Keeps going forever..
>
>
> 18/03/14 09:31:25 INFO Client:
>      client token: N/A
>      diagnostics: N/A
>      ApplicationMaster host: N/A
>      ApplicationMaster RPC port: -1
>      queue: default
>      start time: 1521019884656
>      final status: UNDEFINED
>      tracking URL: http://ip-172-31-0-54:8088/proxy/application_1521014458020_0004/
>      user: centos
>
>
> 18/03/14 09:30:08 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:09 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:10 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:11 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:12 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:13 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:14 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
> 18/03/14 09:30:15 INFO Client: Application report for application_1521014458020_0003 (state: ACCEPTED)
>
> On Wed, Mar 14, 2018 at 2:03 AM, Femi Anthony <femib...@gmail.com> wrote:
>
>> Make sure you have enough memory allocated for the Spark workers. Try
>> specifying the executor memory as follows:
>>
>> --executor-memory <memory>
>>
>> to spark-submit.
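For example (the sizes and the jar name here are placeholders, not a recommendation; the same flags work for spark-shell):

```shell
# Client mode: the driver runs locally, the executors in YARN containers.
# 2g/4g and your-app.jar are illustrative values only.
spark-submit --master yarn --deploy-mode client \
  --driver-memory 2g \
  --executor-memory 4g \
  --num-executors 2 \
  your-app.jar
```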
>>
>> On Wed, Mar 14, 2018 at 3:25 AM, kant kodali <kanth...@gmail.com> wrote:
>>
>>> I am using Spark 2.3.0 and Hadoop 2.7.3.
>>>
>>> I have also done the following and restarted everything, but I still
>>> see "ACCEPTED: waiting for AM container to be allocated, launched and
>>> register with RM", and I am unable to spawn spark-shell.
>>>
>>> I edited $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml and changed the
>>> following property value from 0.1 to something higher. I changed it to
>>> 0.5 (50%):
>>>
>>> <property>
>>>     <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
>>>     <value>0.5</value>
>>>     <description>
>>>         Maximum percent of resources in the cluster which can be used to 
>>> run application masters i.e. controls number of concurrent running 
>>> applications.
>>>     </description>
>>> </property>
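As a back-of-the-envelope sketch of why this setting matters (the node size and Spark's AM request below are illustrative assumptions, not measurements from this cluster):

```shell
# Illustrative numbers: one NodeManager registering 8192 MB with YARN
# (yarn.nodemanager.resource.memory-mb below).
cluster_mb=8192

# With the default maximum-am-resource-percent of 0.1, the AM pool is
# ~819 MB -- smaller than the ~1 GB container a Spark AM typically
# requests (512m default plus overhead, rounded up by the scheduler),
# so the app can sit in ACCEPTED forever.
default_pool=$(awk -v m="$cluster_mb" 'BEGIN { print int(m * 0.1) }')

# Raised to 0.5 as above, the AM pool grows to 4096 MB.
raised_pool=$(awk -v m="$cluster_mb" 'BEGIN { print int(m * 0.5) }')

echo "AM pool at 0.1: ${default_pool} MB; at 0.5: ${raised_pool} MB"
```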
>>>
>>> You may also have to allocate more memory to YARN by updating the
>>> following property in yarn-site.xml:
>>>
>>> <property>
>>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>>     <value>8192</value>
>>> </property>
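A related setting worth checking while you're in yarn-site.xml: yarn.scheduler.maximum-allocation-mb is YARN's per-container ceiling, and a container request larger than it is never allocated. The 8192 below just mirrors the node value above; adjust it to your hardware:

```xml
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
</property>
```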
>>>
>>> https://stackoverflow.com/questions/45687607/waiting-for-am-container-to-be-allocated-launched-and-register-with-rm
>>>
>>>
>>>
>>> On Wed, Mar 14, 2018 at 12:12 AM, kant kodali <kanth...@gmail.com>
>>> wrote:
>>>
>>>> any idea?
>>>>
>>>> On Wed, Mar 14, 2018 at 12:12 AM, kant kodali <kanth...@gmail.com>
>>>> wrote:
>>>>
>>>>> I set core-site.xml, hdfs-site.xml, and yarn-site.xml as per this website
>>>>> <https://dwbi.org/etl/bigdata/183-setup-hadoop-cluster>, and those are
>>>>> the only three files I changed. Do I need to set or change anything in
>>>>> mapred-site.xml (as of now I have not touched mapred-site.xml)?
>>>>>
>>>>> When I run yarn node -list -all I can see that both the node manager
>>>>> and the resource manager are running fine.
>>>>>
>>>>> But when I run spark-shell --master yarn --deploy-mode client, it
>>>>> just loops forever and never stops, printing the following messages:
>>>>>
>>>>> 18/03/14 07:07:47 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>> 18/03/14 07:07:48 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>> 18/03/14 07:07:49 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>> 18/03/14 07:07:50 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>> 18/03/14 07:07:51 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>> 18/03/14 07:07:52 INFO Client: Application report for application_1521011212550_0001 (state: ACCEPTED)
>>>>>
>>>>> When I go to the RM UI I see this:
>>>>> ACCEPTED: waiting for AM container to be allocated, launched and
>>>>> register with RM.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Mar 12, 2018 at 7:16 PM, vermanurag <
>>>>> anurag.ve...@fnmathlogic.com> wrote:
>>>>>
>>>>>> This does not look like a Spark error. It looks like YARN has not
>>>>>> been able to allocate resources for the Spark driver. If you check
>>>>>> the resource manager UI, you are likely to see the Spark application
>>>>>> waiting for resources. Try reducing the driver memory and/or
>>>>>> addressing other bottlenecks based on what you see in the resource
>>>>>> manager UI.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>>>>>>
>>>>>> ---------------------------------------------------------------------
>>>>>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> http://www.femibyte.com/twiki5/bin/view/Tech/
>> http://www.nextmatrix.com
>> "Great spirits have always encountered violent opposition from mediocre
>> minds." - Albert Einstein.
>>
>
>


