limit (1 by default). Once the
limit is hit, the oldest completed app is removed from the cache.
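The retention limit Zhijie describes can be adjusted in yarn-site.xml. A minimal sketch, assuming the standard `yarn.resourcemanager.max-completed-applications` property controls this cache (the value 100 is chosen for illustration, not a recommendation):

```xml
<!-- yarn-site.xml: keep more completed applications visible in the RM UI.
     The value here is illustrative; tune it for your cluster's memory budget. -->
<property>
  <name>yarn.resourcemanager.max-completed-applications</name>
  <value>100</value>
</property>
```

After changing this, the ResourceManager needs a restart for the new limit to take effect.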
- Zhijie
--
*From:* hitarth trivedi t.hita...@gmail.com
*Sent:* Tuesday, May 12, 2015 3:32 PM
*To:* user@hadoop.apache.org
*Subject:* Lost mapreduce
Hi,
My cluster suddenly stopped displaying application information in the UI (
http://localhost:8088/cluster/apps), although the counters like 'Apps
Submitted', 'Apps Completed', 'Apps Running', etc. all seem to increment
accurately and display the right information whenever I start a new mapreduce job.
Hi,
When I submit a job to the yarn ResourceManager the job is successful, and
even the Apps Submitted, Apps Running, Apps Completed counters increment on
the UI http://localhost:8088/cluster/apps, but the applications are not seen
on the same UI http://localhost:8088/cluster/apps.
Could someone point
Hi,
I downloaded the latest release, oozie-4.0.1. When I try to build it
locally using
bin/mkdistro.sh
I get the following error,
Error resolving version for plugin
'com.atlassian.maven.plugins:maven-clover2-plugin' from the repositories
[local (/home/cmx/.m2/repository), repository.cloudera.com (
, and loudly
proclaiming “Wow! What a Ride!” - Hunter Thompson
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Tue, Jan 27, 2015 at 3:46 PM, hitarth trivedi t.hita...@gmail.com
wrote:
Hi,
We have
Hi,
We have yarn.nodemanager.local-dirs set to
/var/lib/hadoop/tmp/nm-local-dir. This is the directory where the mapreduce
jobs store temporary data. On restart of nodemanager, the contents of the
directory are deleted. I see the following definitions for
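The NodeManager cleans up its local directories on restart by design, since the data there is container-scoped scratch space. A hedged sketch of the two relevant yarn-site.xml settings (the delay value is illustrative; `yarn.nodemanager.delete.debug-delay-sec` keeps finished containers' local files around for debugging):

```xml
<!-- yarn-site.xml: NodeManager scratch space for container/intermediate data. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/var/lib/hadoop/tmp/nm-local-dir</value>
</property>
<!-- Illustrative: delay deletion of container-local files by 10 minutes
     so they can be inspected after a job finishes. Default is 0 (delete
     immediately); do not rely on this directory for durable data. -->
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>600</value>
</property>
```

Anything that must survive a restart should be written to HDFS, not to the local-dirs.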
dump using *jstack pid*; this would help analyze the issue faster.
Any free port, i.e. 1024 <= x <= 65535, should work fine.
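The two suggestions above can be combined into a short script. This is a sketch, assuming a JDK (`jps`/`jstack`) is on the PATH; the `valid_port` helper and the `/tmp` output path are illustrative names, not part of the original advice:

```shell
#!/bin/sh
# Helper (hypothetical name): check a candidate port is in the usable
# non-privileged range 1024 <= x <= 65535.
valid_port() { [ "$1" -ge 1024 ] && [ "$1" -le 65535 ]; }

# Take a thread dump of the NodeManager JVM, if a JDK is available.
# jps lists Java processes; jstack prints each thread's stack.
if command -v jps >/dev/null 2>&1; then
  PID=$(jps | awk '/NodeManager/ {print $1}')
  [ -n "$PID" ] && jstack "$PID" > /tmp/nm-threaddump.txt
fi
```

Reading the dump for threads blocked in the same method across several samples is the usual way to spot where a daemon is hanging.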
Thanks & Regards
Rohith Sharma K S
*From:* hitarth trivedi [mailto:t.hita...@gmail.com]
*Sent:* 12 January 2015 07:01
*To:* user@hadoop.apache.org
*Subject:* node
Hi,
We have a resource manager with 4 node managers. Upon submitting the
mapreduce job to the resource manager, it gets stuck at getResources() for
10 min, times out, and then tries another node manager.
When only one nodemanager is running, everything is fine. Upon turning off
Hi,
I am very new to YARN.
I have a setup where the ResourceManager is running on one node and I have 2
NodeManagers running on other nodes.
Could someone point me to the settings required to connect my ResourceManager
to my 2 NodeManagers?
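In YARN the NodeManagers register with the ResourceManager, not the other way around, so the setting lives on each NodeManager host. A minimal sketch, assuming Hadoop 2.x and the standard `yarn.resourcemanager.hostname` property (`rm-host.example.com` is a placeholder for your RM's hostname):

```xml
<!-- yarn-site.xml on EACH NodeManager host: tell the NM where the RM is.
     The individual RM service addresses (scheduler, tracker, admin) are
     derived from this hostname unless overridden explicitly. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.example.com</value>
</property>
```

Once each NodeManager is restarted with this setting, it should appear under Nodes on the RM UI (http://rm-host:8088/cluster/nodes).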
Thanks,
Hitarth
Hi,
I have a 6-node cluster, and the scenario is as follows :-
I have one map reduce job which will write file1 in HDFS.
I have another map reduce job which will write file2 in HDFS.
In the third map reduce job I need to use file1 and file2 to do some
computation and output the value.
What is
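One common way to wire up a scenario like the one above is to run the jobs in sequence and pass both outputs to the third job; MapReduce's FileInputFormat accepts a comma-separated list of input paths. A sketch with placeholder jar, class, and path names (none of these come from the original thread):

```shell
#!/bin/sh
# Placeholder HDFS paths for the outputs of the first two jobs.
OUT1=/data/file1
OUT2=/data/file2
# FileInputFormat accepts comma-separated paths, so the third job can
# read both outputs as its input.
INPUTS="$OUT1,$OUT2"

# Run the three jobs in order; each later job only starts if the
# earlier ones succeed. Jar and class names are hypothetical.
if command -v hadoop >/dev/null 2>&1; then
  hadoop jar myjobs.jar Job1 /raw/input1 "$OUT1" && \
  hadoop jar myjobs.jar Job2 /raw/input2 "$OUT2" && \
  hadoop jar myjobs.jar Job3 "$INPUTS" /data/output3
fi
```

For more complex dependency graphs, a workflow tool such as Oozie (discussed earlier in this thread) expresses the same ordering declaratively instead of in a shell script.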