Hi,

I am not using EMR. And yes, I restarted several times.

On Wed, Mar 14, 2018 at 6:35 AM, Anthony, Olufemi <
olufemi.anth...@capitalone.com> wrote:

> After you updated your yarn-site.xml file, did you restart the YARN
> resource manager?
>
>
>
> https://aws.amazon.com/premiumsupport/knowledge-center/restart-service-emr/
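>
> (If you're not on EMR: on a plain Hadoop 2.x install, a restart is
> typically done with the bundled scripts. A minimal sketch, assuming the
> standard $HADOOP_HOME layout:
>
> $HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager
> $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
>
> or stop-yarn.sh / start-yarn.sh to bounce all YARN daemons at once.)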
>
>
>
> Femi
>
>
>
> From: kant kodali <kanth...@gmail.com>
> Date: Wednesday, March 14, 2018 at 6:16 AM
> To: Femi Anthony <femib...@gmail.com>
> Cc: vermanurag <anurag.ve...@fnmathlogic.com>, "user @spark" <user@spark.apache.org>
> Subject: Re: How to run spark shell using YARN
>
>
>
> 16GB RAM. AWS m4.xlarge. It's a three-node cluster and I only have YARN
> and HDFS running. Resources are barely used; however, I believe there is
> something in my config that is preventing YARN from seeing that I have a
> good amount of resources (that's my guess; I have never worked with YARN
> before). My mapred-site.xml is empty. Do I even need it? If so, what
> should I set it to?
>
>
>
> On Wed, Mar 14, 2018 at 2:46 AM, Femi Anthony <femib...@gmail.com> wrote:
>
> What's the hardware configuration of the box you're running on, i.e. how
> much memory does it have?
>
>
>
> Femi
>
>
>
> On Wed, Mar 14, 2018 at 5:32 AM, kant kodali <kanth...@gmail.com> wrote:
>
> I tried this:
>
>
>
>  ./spark-shell --master yarn --deploy-mode client --executor-memory 4g
>
>
>
> Same issue. It keeps going forever.
>
>
>
> 18/03/14 09:31:25 INFO Client:
>
> client token: N/A
>
> diagnostics: N/A
>
> ApplicationMaster host: N/A
>
> ApplicationMaster RPC port: -1
>
> queue: default
>
> start time: 1521019884656
>
> final status: UNDEFINED
>
> tracking URL: http://ip-172-31-0-54:8088/proxy/application_1521014458020_0004/
>
> user: centos
>
>
>
> 18/03/14 09:30:08 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:09 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:10 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:11 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:12 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:13 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:14 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
> 18/03/14 09:30:15 INFO Client: Application report for
> application_1521014458020_0003 (state: ACCEPTED)
>
>
>
> On Wed, Mar 14, 2018 at 2:03 AM, Femi Anthony <femib...@gmail.com> wrote:
>
> Make sure you have enough memory allocated for the Spark workers; try
> specifying executor memory as follows:
>
> --executor-memory <memory>
>
> to spark-submit.
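>
> For instance, a minimal sketch of the invocation with the memory flags
> spelled out (the sizes here are placeholder assumptions; pick values
> that fit your nodes):
>
>  ./spark-shell --master yarn --deploy-mode client \
>      --driver-memory 1g \
>      --executor-memory 2g \
>      --num-executors 2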
>
>
>
> On Wed, Mar 14, 2018 at 3:25 AM, kant kodali <kanth...@gmail.com> wrote:
>
> I am using Spark 2.3.0 and Hadoop 2.7.3.
>
>
>
> Also, I have done the following and restarted everything, but I still
> see "ACCEPTED: waiting for AM container to be allocated, launched and
> register with RM", and I am unable to spawn spark-shell.
>
>
>
> I edited $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml and changed the
> following property value from 0.1 to something higher. I changed it to
> 0.5 (50%):
>
> <property>
>
>     <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
>
>     <value>0.5</value>
>
>     <description>
>
>         Maximum percent of resources in the cluster which can be used to run 
> application masters i.e. controls number of concurrent running applications.
>
>     </description>
>
> </property>
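>
> After changing capacity-scheduler.xml, the queue configuration can
> usually be reloaded without a full restart (a sketch; this assumes the
> default CapacityScheduler):
>
> yarn rmadmin -refreshQueues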
>
> You may have to allocate more memory to YARN by updating the following
> property in yarn-site.xml:
>
> <property>
>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>
>     <value>8192</value>
>
> </property>
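>
> (If the AM container still isn't allocated, it may also be worth
> checking the per-container ceiling alongside the NodeManager total; a
> sketch with an assumed value:
>
> <property>
>     <!-- largest single container the scheduler will grant;
>          must be at least as big as the requested AM container -->
>     <name>yarn.scheduler.maximum-allocation-mb</name>
>     <value>8192</value>
> </property>)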
>
> https://stackoverflow.com/questions/45687607/waiting-for-am-container-to-be-allocated-launched-and-register-with-rm
>
>
>
>
>
>
>
> On Wed, Mar 14, 2018 at 12:12 AM, kant kodali <kanth...@gmail.com> wrote:
>
> any idea?
>
>
>
> On Wed, Mar 14, 2018 at 12:12 AM, kant kodali <kanth...@gmail.com> wrote:
>
> I set core-site.xml, hdfs-site.xml, and yarn-site.xml as per this website:
> https://dwbi.org/etl/bigdata/183-setup-hadoop-cluster
> These are the only three files I changed. Do I need to set or change
> anything in mapred-site.xml? (As of now I have not touched
> mapred-site.xml.)
>
>
>
> When I do yarn node -list -all I can see that both the node manager and
> resource manager are running fine.
>
>
>
> But when I run spark-shell --master yarn --deploy-mode client
>
>
>
>
>
> it just keeps looping forever, printing the following messages:
>
>
>
> 18/03/14 07:07:47 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> 18/03/14 07:07:48 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> 18/03/14 07:07:49 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> 18/03/14 07:07:50 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> 18/03/14 07:07:51 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> 18/03/14 07:07:52 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
>
>
> when I go to RM UI I see this
>
>
>
> ACCEPTED: waiting for AM container to be allocated, launched and register
> with RM.
>
>
>
>
>
>
>
>
>
> On Mon, Mar 12, 2018 at 7:16 PM, vermanurag <anurag.ve...@fnmathlogic.com>
> wrote:
>
> This does not look like a Spark error. It looks like YARN has not been
> able to allocate resources for the Spark driver. If you check the
> resource manager UI, you are likely to see the Spark application waiting
> for resources. Try reducing the driver memory and/or addressing other
> bottlenecks based on what you see in the resource manager UI.
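>
> For example, a minimal sketch that shrinks the driver and executors so
> the AM can fit (the values are assumptions; tune them to your cluster):
>
> spark-shell --master yarn --deploy-mode client \
>     --driver-memory 512m \
>     --executor-memory 1g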
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
>
>
>
>
>
>
>
>
>
> --
>
> http://www.femibyte.com/twiki5/bin/view/Tech/
>
> http://www.nextmatrix.com
>
> "Great spirits have always encountered violent opposition from mediocre
> minds." - Albert Einstein.
>
>
>
>
>
>
>
> --
>
> http://www.femibyte.com/twiki5/bin/view/Tech/
>
> http://www.nextmatrix.com
>
> "Great spirits have always encountered violent opposition from mediocre
> minds." - Albert Einstein.
>
>
>
>
