Any ideas?

On Wed, Mar 14, 2018 at 12:12 AM, kant kodali <kanth...@gmail.com> wrote:

> I set core-site.xml, hdfs-site.xml, and yarn-site.xml as per this website
> <https://dwbi.org/etl/bigdata/183-setup-hadoop-cluster>, and these are the
> only three files I changed. Do I need to set or change anything in
> mapred-site.xml? (As of now I have not touched mapred-site.xml.)
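>
> For what it's worth, the only property mapred-site.xml usually needs on a
> YARN cluster is the framework name (illustrative snippet below), and
> spark-shell should start fine without touching mapred-site.xml at all:
>
>   <configuration>
>     <property>
>       <!-- run MapReduce jobs on YARN rather than the local runner -->
>       <name>mapreduce.framework.name</name>
>       <value>yarn</value>
>     </property>
>   </configuration>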
>
> When I run yarn node -list -all I can see that both the NodeManagers and the
> ResourceManager are running fine.
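>
> For example, to double-check what each NodeManager is actually advertising
> to the ResourceManager (a rough sketch; substitute a real node id from the
> listing):
>
>   # list every NodeManager known to the RM, including unhealthy ones
>   yarn node -list -all
>
>   # show the memory/vcore capacity and current usage of one node
>   yarn node -status <node-id>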
>
> But when I run spark-shell --master yarn --deploy-mode client
>
>
> it just loops forever, repeatedly printing the following messages:
>
> 18/03/14 07:07:47 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
> 18/03/14 07:07:48 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
> 18/03/14 07:07:49 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
> 18/03/14 07:07:50 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
> 18/03/14 07:07:51 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
> 18/03/14 07:07:52 INFO Client: Application report for
> application_1521011212550_0001 (state: ACCEPTED)
>
> When I go to the ResourceManager UI I see this:
>
> ACCEPTED: waiting for AM container to be allocated, launched and register
> with RM.
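>
> For reference, a spark-shell invocation that asks YARN for a deliberately
> small AM container and a single small executor (values are illustrative;
> spark.yarn.am.memory is the relevant setting here because this is client
> mode) would look like:
>
>   spark-shell \
>     --master yarn \
>     --deploy-mode client \
>     --conf spark.yarn.am.memory=512m \
>     --conf spark.yarn.am.cores=1 \
>     --num-executors 1 \
>     --executor-memory 1g \
>     --executor-cores 1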
>
>
>
>
> On Mon, Mar 12, 2018 at 7:16 PM, vermanurag <anurag.ve...@fnmathlogic.com>
> wrote:
>
>> This does not look like a Spark error. It looks like YARN has not been able
>> to allocate resources for the Spark driver. If you check the ResourceManager
>> UI you will likely see the Spark application waiting for resources. Try
>> reducing the driver memory, and/or address whatever other bottleneck shows
>> up in the ResourceManager UI.
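>>
>> For example, the YARN properties that usually matter here are the per-node
>> container memory and the scheduler's maximum allocation: if the AM container
>> request (driver/AM memory plus overhead) does not fit into what a
>> NodeManager advertises, the application sits in ACCEPTED forever. A rough
>> yarn-site.xml sketch, with purely illustrative values you would size to
>> your own nodes:
>>
>>   <!-- memory a single NodeManager offers to containers -->
>>   <property>
>>     <name>yarn.nodemanager.resource.memory-mb</name>
>>     <value>4096</value>
>>   </property>
>>
>>   <!-- largest single container the scheduler will ever grant -->
>>   <property>
>>     <name>yarn.scheduler.maximum-allocation-mb</name>
>>     <value>4096</value>
>>   </property>
>>
>>   <!-- smallest single container the scheduler will grant -->
>>   <property>
>>     <name>yarn.scheduler.minimum-allocation-mb</name>
>>     <value>512</value>
>>   </property>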
>>
>>
>>
>
