> Sent: 13 May 2015 05:00
> To: user@hadoop.apache.org
> Subject: Re: Lost mapreduce applications displayed in UI
>
>
>
> Maybe you have hit the completed-application limit (10000 by default).
> Once the limit is reached, the oldest completed app is removed from the
> cache.
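If that is the cause, the limit can be raised in yarn-site.xml. A minimal sketch, assuming the limit in question is yarn.resourcemanager.max-completed-applications (the value shown is illustrative):

    <property>
      <name>yarn.resourcemanager.max-completed-applications</name>
      <value>20000</value>
    </property>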
Hi,
My cluster suddenly stopped displaying application information in the UI
(http://localhost:8088/cluster/apps). The counters like 'Apps Submitted',
'Apps Completed', and 'Apps Running' all seem to increment accurately and
display the right information whenever I start a new mapreduce job.
Hi,
When I submit a job to the yarn ResourceManager the job is successful, and
even the Apps Submitted, Apps Running, and Apps Completed counters
increment on the UI http://localhost:8088/cluster/apps, but the
applications themselves are not listed on that same UI.
Could someone point out
Hi,
I downloaded the latest release, oozie-4.0.1. When I try to build it
locally using
bin/mkdistro.sh
I get the following error:
Error resolving version for plugin
'com.atlassian.maven.plugins:maven-clover2-plugin' from the repositories
[local (/home/cmx/.m2/repository), repository.cloudera.com (
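One common workaround for this kind of resolution failure (a sketch, not from the original thread; the plugin version shown is illustrative) is to pin the plugin version in the pom's pluginManagement so Maven does not have to resolve it from a remote repository:

    <build>
      <pluginManagement>
        <plugins>
          <plugin>
            <groupId>com.atlassian.maven.plugins</groupId>
            <artifactId>maven-clover2-plugin</artifactId>
            <version>3.0.5</version>
          </plugin>
        </plugins>
      </pluginManagement>
    </build>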
Hi,
We have yarn.nodemanager.local-dirs set to
/var/lib/hadoop/tmp/nm-local-dir. This is the directory where mapreduce
jobs store temporary data. On restart of the nodemanager, the contents of
the directory are deleted. I see the following definitions for
yarn.nodemanager.localizer.cache.target-size-mb
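A minimal yarn-site.xml sketch with the two properties mentioned (the local-dirs value is the one from this message; 10240 MB is the stock default for the cache target, and both values are illustrative):

    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/var/lib/hadoop/tmp/nm-local-dir</value>
    </property>
    <property>
      <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
      <value>10240</value>
    </property>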
>
>
>
> I suggest that whenever there is a problem like getting stuck, you take a
> thread dump using jstack; this would help analyze the issue faster.
>
>
>
> Any free port, i.e. 1024 <= x <= 65535, should work fine.
>
>
>
> Thanks & Regards
>
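A minimal sketch of the thread-dump step suggested above (jps and jstack ship with the JDK; the pid and output path are illustrative):

    # Find the pid of the stuck JVM (e.g. the NodeManager or the MR app).
    jps -l
    # Dump all thread stacks, including lock information, to a file.
    jstack -l 12345 > /tmp/threads.txt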
Hi,
We have a resource manager with 4 node managers. Upon submitting a
mapreduce job to the resource manager, it gets stuck at getResources() for
10 min, times out, and then tries another node manager.
When only one nodemanager is running, everything is fine. Upon turning off the
Hi,
I have a 6-node cluster, and the scenario is as follows:
I have one map reduce job which will write file1 in HDFS.
I have another map reduce job which will write file2 in HDFS.
In the third map reduce job I need to use file1 and file2 to do some
computation and output the value.
What is the
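If the question is how the third job can read both files, one standard approach is to add both paths as inputs of the same job. A minimal sketch using the stock mapreduce API (paths, class name, and the identity map/reduce are illustrative, not from the original message):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ThirdJob {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine file1+file2");
        job.setJarByClass(ThirdJob.class);
        // Outputs of the first two jobs become inputs of this one;
        // addInputPath can be called once per input file or directory.
        FileInputFormat.addInputPath(job, new Path("/user/out/file1"));
        FileInputFormat.addInputPath(job, new Path("/user/out/file2"));
        FileOutputFormat.setOutputPath(job, new Path("/user/out/result"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

As written this runs the identity map and reduce; a real job would set its own classes with setMapperClass/setReducerClass.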
Hi,
I am very new to YARN.
I have a setup where the ResourceManager is running on one node and I have
2 NodeManagers running on other nodes.
Could someone point me to the settings required to connect my 2
NodeManagers to the ResourceManager?
Thanks,
Hitarth
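For that setup, each NodeManager finds the ResourceManager through yarn.resourcemanager.hostname in its yarn-site.xml. A minimal sketch (the hostname is illustrative):

    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>rm-host.example.com</value>
    </property>

With that set, the individual yarn.resourcemanager.*.address properties derive their defaults from it, and the NodeManagers register with the ResourceManager on startup.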