Re: Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread Silvina Caíno Lores
Hi again, I've tried to build using JDK 1.6.0_38 and I'm still getting the same exception: ~/hadoop-2.2.0-maven$ java -version java version "1.6.0_38-ea" Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode) -- [ERROR] Failed

Re: get error in running terasort tool

2013-12-04 Thread Jitendra Yadav
Can you check how many healthy data nodes are available in your cluster? Use: # hadoop dfsadmin -report Regards Jitendra On Thu, Dec 5, 2013 at 12:48 PM, ch huang wrote: > hi,maillist: > i try run terasort in my cluster ,but failed ,following > is error ,i do not know why, anyon
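Jitendra's suggestion can be scripted. The sketch below is a hedged example: the exact `dfsadmin -report` layout varies between Hadoop versions, so it parses a saved sample standing in for real cluster output, and the live-cluster one-liner is shown only as a comment.

```shell
# Count healthy datanodes from a `hadoop dfsadmin -report` dump.
# On a live cluster you would pipe the command itself, e.g.:
#   hadoop dfsadmin -report | grep -c 'Decommission Status : Normal'
# Here we parse a saved sample, since the report format differs by version.
report='Datanodes available: 3 (3 total, 0 dead)

Name: 10.0.0.1:50010
Decommission Status : Normal

Name: 10.0.0.2:50010
Decommission Status : Normal

Name: 10.0.0.3:50010
Decommission Status : Normal'

live=$(printf '%s\n' "$report" | grep -c 'Decommission Status : Normal')
echo "live datanodes: $live"
```

If the live count is below the HDFS replication factor, writes (and hence TeraSort) will fail.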

Re: get error in running terasort tool

2013-12-04 Thread ch huang
BTW, I use CDH 4.4. On Thu, Dec 5, 2013 at 3:18 PM, ch huang wrote: > hi,maillist: > i try run terasort in my cluster ,but failed ,following > is error ,i do not know why, anyone can help? > > # hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar > terasort /alex/te

get error in running terasort tool

2013-12-04 Thread ch huang
Hi maillist, I tried to run terasort in my cluster but it failed. The following is the error; I do not know why. Can anyone help? # hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /alex/terasort/1G-input /alex/terasort/1G-output 13/12/05 15:15:43 INFO terasort.TeraSo
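For reference, a typical TeraSort run has three steps: generate the input with `teragen`, sort it, and optionally validate the result. This sketch reuses the jar path and HDFS paths from the message above; it needs a running cluster with enough healthy datanodes to satisfy the HDFS replication factor, which is a common cause of the failure reported here.

```shell
JAR=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

# 1 GB of input = 10,000,000 rows of 100 bytes each
hadoop jar "$JAR" teragen 10000000 /alex/terasort/1G-input

# Sort it; the output directory must not already exist
hadoop jar "$JAR" terasort /alex/terasort/1G-input /alex/terasort/1G-output

# Optionally check that the output really is globally sorted
hadoop jar "$JAR" teravalidate /alex/terasort/1G-output /alex/terasort/1G-validate
```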

Re: Implementing and running an applicationmaster

2013-12-04 Thread Yue Wang
Hi, I took a look at the code and found some examples on the web. One example is: http://wiki.opf-labs.org/display/SP/Resource+management It seems that users can run simple shell commands using the Client of YARN. But when it comes to a practical MapReduce example like WordCount, people still run co

Re: issue about the MR JOB local dir

2013-12-04 Thread ch huang
Thank you, but it seems the doc is a little old. The doc says - *PUBLIC:* /filecache - *PRIVATE:* /usercache//filecache - *APPLICATION:* /usercache//appcache// but here is my nodemanager directory. I guess nmPrivate belongs to the private dir, and the filecache dir does not exist in usercache # ls /d

Re: issue about capacity scheduler

2013-12-04 Thread Vinod Kumar Vavilapalli
If both the jobs in the MR queue are from the same user, CapacityScheduler will only try to run them one after another. If possible, run them as different users. At which point, you will see sharing across jobs because they are from different users. Thanks, +Vinod On Dec 4, 2013, at 1:33 AM,

Re: issue about the MR JOB local dir

2013-12-04 Thread Vinod Kumar Vavilapalli
These are the directories where the NodeManager (as configured) will store its local files. Local files include scripts, jars, and libraries - all files sent to nodes via the DistributedCache. Thanks, +Vinod On Dec 3, 2013, at 5:26 PM, ch huang wrote: > hi,maillist: > i see three dirs on my l

Re: Client mapred tries to renew a token with renewer specified as nobody

2013-12-04 Thread Vinod Kumar Vavilapalli
It clearly mentions that the renewer is wrong (the renewer is marked as 'nobody' but mapred is trying to renew the token); you may want to check this. Thanks, +Vinod On Dec 2, 2013, at 8:25 AM, Rainer Toebbicke wrote: > 2013-12-02 15:57:08,541 ERROR > org.apache.hadoop.security.UserGroupInforma

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memo

2013-12-04 Thread YouPeng Yang
Hi, please refer to http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/ 2013/12/5 panfei > we have already tried several values of these two parameters, but it seems > no use. > > > 2013/12/5 Tsuyoshi OZAWA > >> Hi, >> >> Please check the properties like mapreduce.redu

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memo

2013-12-04 Thread panfei
We have already tried several values of these two parameters, but it seems to be of no use. 2013/12/5 Tsuyoshi OZAWA > Hi, > > Please check the properties like mapreduce.reduce.memory.mb and > mapreduce.map.memory.mb in mapred-site.xml. These properties decide > resource limits for mappers/reducers. > >

Re: issue about capacity scheduler

2013-12-04 Thread ch huang
Another question: I set yarn.scheduler.minimum-allocation-mb to 2 GB, so the container size will be at least 2 GB, but I see the appMaster container only uses a 1 GB heap size. Why? # ps -ef|grep 8062 yarn 8062 8047 5 09:04 ?00:00:09 /usr/java/jdk1.7.0_25/bin/java -Dlog4j.configuration=conta
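The 1 GB heap and the 2 GB container are two different knobs: the scheduler rounds the AM's container request (yarn.app.mapreduce.am.resource.mb, default 1536) up to a multiple of the minimum allocation, while the JVM heap inside the container comes from yarn.app.mapreduce.am.command-opts (default -Xmx1024m). A hedged sketch of the rounding arithmetic, using the values from this thread:

```shell
# YARN normalizes a container request up to a multiple of the minimum
# allocation. With yarn.scheduler.minimum-allocation-mb=2048, a default
# AM request of 1536 MB becomes a 2048 MB container, while the AM's JVM
# heap stays at whatever -Xmx its command opts specify (1024m by default).
normalize() {
  echo $(( ($1 + $2 - 1) / $2 * $2 ))   # round $1 up to a multiple of $2
}

echo "AM container size: $(normalize 1536 2048) MB"   # prints 2048
echo "map container size: $(normalize 1024 2048) MB"  # prints 2048
```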

Re: issue about capacity scheduler

2013-12-04 Thread ch huang
If I have 40 GB of cluster memory and "yarn.scheduler.capacity.maximum-am-resource-percent" is set to 0.1, does that mean that when I launch an appMaster I need to allocate 4 GB to it? If so, why does increasing the value cause more appMasters to run concurrently, instead of fewer?
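The property caps the total memory that all running ApplicationMasters together may hold, not the size of any one AM. With 40 GB of cluster memory and 0.1, up to 4 GB may be occupied by AMs; each AM still takes only its own container size, so raising the percentage raises the cap and lets more AMs run at once. A sketch of that arithmetic (the 2 GB AM container size is an assumption for illustration):

```shell
cluster_mb=40960        # 40 GB cluster, as in the question
percent=0.1             # yarn.scheduler.capacity.maximum-am-resource-percent
am_container_mb=2048    # size of one AM container (assumed value)

# Total memory reserved for ALL AMs, and how many can run concurrently.
max_am_mb=$(awk -v c="$cluster_mb" -v p="$percent" 'BEGIN { print int(c * p) }')
concurrent=$(( max_am_mb / am_container_mb ))
echo "AM memory cap: ${max_am_mb} MB -> up to ${concurrent} concurrent AMs"
```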

Check compression codec of an HDFS file

2013-12-04 Thread alex bohr
What's the best way to check the compression codec that an HDFS file was written with? We use both Gzip and Snappy compression so I want a way to determine how a specific file is compressed. The closest I found is the *getCodec
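`CompressionCodecFactory.getCodec` maps a codec from the file name extension only, so it does not help when files lack extensions. A hedged alternative is to look at the leading bytes: gzip files always start with the magic bytes 1f 8b, while raw Snappy output has no standard magic, so in a gzip-or-Snappy shop an unrecognized magic likely means Snappy. The sketch below works on a local file standing in for `hdfs dfs -cat file | head -c 2`:

```shell
# Identify gzip by its 2-byte magic (1f 8b). For an HDFS file you would
# stream the first bytes instead of reading a local file:
#   hdfs dfs -cat /path/to/file | head -c 2 | od -An -tx1
codec_of() {
  magic=$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')
  case "$magic" in
    1f8b) echo gzip ;;
    *)    echo "unknown (possibly snappy)" ;;
  esac
}

# demo on a locally created gzip file
printf 'hello' | gzip > /tmp/demo.gz
codec_of /tmp/demo.gz    # prints "gzip"
```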

Re: issue about the MR JOB local dir

2013-12-04 Thread Jian He
The following links may help you http://hortonworks.com/blog/management-of-application-dependencies-in-yarn/ http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/ Thanks, Jian On Tue, Dec 3, 2013 at 5:26 PM, ch huang wrote: > hi,maillist: > i see three dirs on my loc

Re: issue about capacity scheduler

2013-12-04 Thread Jian He
You can probably try increasing "yarn.scheduler.capacity.maximum-am-resource-percent"; this controls the max concurrently running AMs. Thanks, Jian On Wed, Dec 4, 2013 at 1:33 AM, ch huang wrote: > hi,maillist : > i use yarn framework and capacity scheduler ,and i have > two

RE: Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread java8964
Can you try JDK 1.6? I just did a Hadoop 2.2.0 GA release build myself a few days ago. From my experience, JDK 1.7 did not work for me. Yong Date: Wed, 4 Dec 2013 19:55:16 +0100 Subject: Re: Ant BuildException error building Hadoop 2.2.0 From: silvi.ca...@gmail.com To: user@hadoop.apache.org Hi, It seems

RE: Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread java8964
Do you have 'cmake' in your environment? Yong Date: Wed, 4 Dec 2013 17:20:03 +0100 Subject: Ant BuildException error building Hadoop 2.2.0 From: silvi.ca...@gmail.com To: user@hadoop.apache.org Hello everyone, I've been having trouble to build Hadoop 2.2.0 using Maven 3.1.1, this is part of th

Re: Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread Silvina Caíno Lores
Hi, It seems I do: ~/hadoop-2.2.0-maven$ cmake --version cmake version 2.8.2 On 4 December 2013 19:51, java8964 wrote: > Do you have 'cmake' in your environment? > > Yong > > -- > Date: Wed, 4 Dec 2013 17:20:03 +0100 > Subject: Ant BuildException error building Ha

Re: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memo

2013-12-04 Thread Tsuyoshi OZAWA
Hi, Please check the properties like mapreduce.reduce.memory.mb and mapreduce.map.memory.mb in mapred-site.xml. These properties decide resource limits for mappers/reducers. On Wed, Dec 4, 2013 at 10:16 PM, panfei wrote: > > > -- Forwarded message -- > From: panfei > Date: 2013/1
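Besides editing mapred-site.xml, these limits can be passed per job on the command line (via the standard `-D` generic options), which makes it easier to experiment with values. The jar and class names below are hypothetical placeholders; the point is that each container size should exceed the JVM heap given in the matching java.opts so the process has headroom beyond the heap.

```shell
# Illustrative per-job overrides (job.jar / MyJob are placeholders):
hadoop jar job.jar MyJob \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts=-Xmx1536m \
  -Dmapreduce.reduce.memory.mb=4096 \
  -Dmapreduce.reduce.java.opts=-Xmx3072m \
  input output
```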

How to add aspects to hadoop 2.2

2013-12-04 Thread Black, James A.
Hello, I asked this question here: http://stackoverflow.com/questions/20330549/how-to-add-aspects-to-hadoop-2-2, but haven't gotten any help so I thought I would ask here. Basically, I would like to be able to see all the methods called when I do particular operations, as I will then need t

Ant BuildException error building Hadoop 2.2.0

2013-12-04 Thread Silvina Caíno Lores
Hello everyone, I've been having trouble building Hadoop 2.2.0 using Maven 3.1.1; this is part of the output I get (full log at http://pastebin.com/FE6vu46M): [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main .

Fwd: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual mem

2013-12-04 Thread panfei
-- Forwarded message -- From: panfei Date: 2013/12/4 Subject: Container [pid=22885,containerID=container_1386156666044_0001_01_000013] is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 332.5 GB of 8 GB virtual memory used. Killing contain

issue about capacity scheduler

2013-12-04 Thread ch huang
Hi maillist, I use the YARN framework and the capacity scheduler, and I have two queues, one for Hive and the other for big MR jobs. The Hive queue works fine because Hive tasks are very fast, but what I am thinking about is: user A submitted two big MR jobs, so the first big job eats all the resources bel

Aw: mapreduce.jobtracker.expire.trackers.interval no effect

2013-12-04 Thread Hansi Klose
Hi. I think I found the reason. I looked at the job.xml and found the parameters mapred.tasktracker.expiry.interval 600 and mapreduce.jobtracker.expire.trackers.interval 3 So I tried the deprecated parameter mapred.tasktracker.expiry.interval in my configuration and voila, it works! W

RE: about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-04 Thread Henry Hung
@Harsh J Thank you, I intend to upgrade from Hadoop 1.0.4 and this kind of information is very helpful. Best regards, Henry -Original Message- From: Harsh J [mailto:ha...@cloudera.com] Sent: Wednesday, December 04, 2013 4:20 PM To: Subject: Re: about hadoop-2.2.0 "mapred.child.java.opt

Re: Client mapred tries to renew a token with renewer specified as nobody

2013-12-04 Thread Rainer Toebbicke
Well, that does not seem to be the issue. The Kerberos ticket gets refreshed automatically, but the delegation token doesn't. On 3 Dec 2013, at 20:24, Raviteja Chirala wrote: > Alternatively you can schedule a cron job to do kinit every 20 hours or so. > Just to renew token before it expires

Aw: Re: mapreduce.jobtracker.expire.trackers.interval no effect

2013-12-04 Thread Hansi Klose
Hi Adam, in our environment it does not matter what I insert there; it always takes over 600 seconds. I tried 3 and the result was the same. Regards Hansi Sent: Tuesday, 03 December 2013 at 19:23 From: "Adam Kawa" To: user@hadoop.apache.org Subject: Re: mapreduce.jobtracker.expi

Re: about hadoop-2.2.0 "mapred.child.java.opts"

2013-12-04 Thread Harsh J
Actually, it's the other way around (thanks Sandy for catching this error in my post). The presence of mapreduce.map|reduce.java.opts overrides mapred.child.java.opts, not the other way round as I had stated earlier (below). On Wed, Dec 4, 2013 at 1:28 PM, Harsh J wrote: > Yes but the old property
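The precedence Harsh describes is a simple lookup: the per-phase property wins when present, otherwise the old mapred.child.java.opts value applies. A hedged illustration of that resolution rule (not Hadoop code, just the logic):

```shell
# Resolve the effective JVM opts for a task, per the precedence above:
# mapreduce.map.java.opts (or mapreduce.reduce.java.opts) overrides
# mapred.child.java.opts when set; otherwise the old property is used.
effective_opts() {
  specific=$1   # value of mapreduce.<phase>.java.opts ("" if unset)
  generic=$2    # value of mapred.child.java.opts
  if [ -n "$specific" ]; then echo "$specific"; else echo "$generic"; fi
}

effective_opts "-Xmx1024m" "-Xmx200m"   # prints "-Xmx1024m"
effective_opts ""          "-Xmx200m"   # prints "-Xmx200m"
```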