While creating a binary distribution without native code and without documentation, I came across this issue. Can anyone suggest a workaround?

2012-01-23 Thread rajesh putta
Hi, while creating a binary distribution without native code and without documentation, I came across this issue. Can anyone suggest a workaround? $ mvn package -Pdist -DskipTests -Dtar [mkdir] Created dir: /home/rajesh/Hadoop-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/t...
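For context, the command shown here is the standard trunk distribution build; a sketch of the variants as listed in Hadoop's BUILDING.txt (profile names may differ across trunk revisions):

    # binary distribution without native code and without documentation
    mvn package -Pdist -DskipTests -Dtar

    # binary distribution with native code and with documentation
    # (assumes the native toolchain and documentation prerequisites are installed)
    mvn package -Pdist,native,docs -DskipTests -Dtar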

Re: Issues while building hadoop trunk

2012-01-23 Thread rajesh putta
Hi Harsh, even after setting the LD_LIBRARY_PATH variable I am still getting the same error. Thanks Rajesh Putta On Tue, Jan 24, 2012 at 1:11 AM, Harsh J wrote: > Hi, > > On Tue, Jan 24, 2012 at 1:00 AM, rajesh putta wrote: >> While building hadoop trunk I came acros...
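For readers hitting the same error: Harsh's suggestion amounts to exporting LD_LIBRARY_PATH before re-running Maven. A minimal sketch, assuming the missing native libraries live under /usr/local/lib (adjust the path to wherever the required .so files are on your system):

    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    mvn package -Pdist -DskipTests -Dtar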

Issues while building hadoop trunk

2012-01-23 Thread rajesh putta
...can resume the build with the command [ERROR] mvn -rf :hadoop-common Thanks Rajesh Putta
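The [ERROR] lines quoted above are Maven's standard hint for resuming a failed multi-module build from the module that broke, instead of rebuilding everything. A sketch, assuming the goals originally run were package -DskipTests:

    # after fixing the failure, resume the reactor at hadoop-common
    mvn package -DskipTests -rf :hadoop-common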

Issue while building hadoop-mapreduce-project in trunk

2011-11-11 Thread rajesh putta
...ems, you can resume the build with the command [ERROR] mvn -rf :hadoop-mapreduce-client-core Thanks Rajesh Putta Development Engineer Pramati Technologies

Re: Error while building Hadoop-Yarn

2011-08-19 Thread rajesh putta
Thanks Arun, now it's working fine. Thanks & Regards Rajesh Putta Development Engineer Pramati Technologies On Fri, Aug 19, 2011 at 12:25 PM, Arun Murthy wrote: > That means you don't have the autotools chain necessary for building the > native code. > > For now pass -...
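Arun's quoted advice is cut off above; independent of whatever flag he suggested, the longer-term fix for a missing autotools chain is to install it before building the native code. A hedged sketch for Debian/Ubuntu-style systems (package names vary by distribution):

    sudo apt-get install autoconf automake libtool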

Error while building Hadoop-Yarn

2011-08-18 Thread rajesh putta
...-yarn-server-nodemanager: autoreconf command returned an exit value != 0. Aborting build; see debug output for more information. -> [Help 1] Thanks in advance. Thanks & Regards Rajesh Putta

Re: Can I use MapWritable as a key?

2011-07-19 Thread rajesh putta
...o use SortedMapWritable as the key. Thanks & Regards Rajesh Putta M Tech CSE IIIT-H On Wed, Jul 20, 2011 at 5:32 AM, Choonho Son wrote: > I am a newbie. > > Most of the examples show > job.setOutputKeyClass(Text.class); > > is it possible to use job.setOutputKeyClass(MapWritable.cl...
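The underlying issue is that map-output keys must be sortable, and MapWritable only implements Writable. A minimal sketch of the suggested swap, assuming a Hadoop 2.x-style Job API; note that depending on the Hadoop version, SortedMapWritable may not implement WritableComparable itself, in which case a RawComparator registered via job.setSortComparatorClass(...) is still needed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.SortedMapWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    public class SortedMapKeyExample {
        public static void main(String[] args) throws Exception {
            // Use SortedMapWritable instead of MapWritable as the key type;
            // the value class below is an arbitrary example.
            Job job = Job.getInstance(new Configuration(), "sorted-map-key");
            job.setOutputKeyClass(SortedMapWritable.class);
            job.setOutputValueClass(Text.class);
        }
    }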

Re: Too many fetch-failures

2011-07-19 Thread rajesh putta
Yes, we can set mapred.tasktracker.map.tasks.maximum for each node. Thanks & Regards Rajesh Putta M Tech CSE IIIT-H On Tue, Jul 19, 2011 at 6:36 PM, Mohamed Riadh Trad wrote: > Hi, > > I am running hadoop on a cluster with nodes having different > configurations. Is it...
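To make the per-node setting concrete: each TaskTracker reads mapred.tasktracker.map.tasks.maximum from its own local mapred-site.xml, so nodes with different hardware can carry different values. A sketch (the value 4 is an arbitrary example):

    <!-- mapred-site.xml on the individual node -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>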

Re: Too many fetch-failures

2011-07-18 Thread rajesh putta
... .java:187) at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:227) at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:149) Fix or solution: add an entry for every hostname in the cluster to the /etc/hosts file. Thanks & Regards Rajesh Putta M Tech CSE IIIT-H On Tue, Jul 19, 2011 a...
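A sketch of the described fix, with hypothetical addresses and hostnames (substitute your cluster's real ones); the same entries go into /etc/hosts on every node, so the reduce-side Fetcher can resolve the map-side hosts:

    # /etc/hosts on every node; IPs and names below are placeholders
    192.168.1.10  master
    192.168.1.11  slave1
    192.168.1.12  slave2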