Re: Too-many fetch failure Reduce Error

2011-01-11 Thread Adarsh Sharma
Any update on this error? Thanks. Adarsh Sharma wrote: Esteban Gutierrez Moguel wrote: Adarsh, do you have the hostnames for the masters and slaves in /etc/hosts? Yes, I know about this issue. But do you think the error occurs while reading the map output? I want to know the proper reason
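For reference, "too many fetch failures" very often traces back to reducers failing to resolve the mapper hosts, which is why Esteban asks about /etc/hosts. A minimal sketch of the kind of file he means, identical on every master and slave (the addresses and hostnames below are made up; use your cluster's real ones):

```
192.168.1.10   master.example.com   master
192.168.1.11   slave1.example.com   slave1
192.168.1.12   slave2.example.com   slave2
```

Every node should resolve every other node's hostname to the same address, and the hostname each TaskTracker reports must be resolvable from all the other nodes.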

Re: Application for testing

2011-01-11 Thread Konstantin Boudnik
(Moving general@ to Bcc: list) Bo, you can try to run TeraSort from the Hadoop examples: you'll see if the cluster is up and running and can compare its performance between upgrades, if needed. Also, please don't use general@ for user questions: there's the common-user@ list exactly for these purposes.
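A typical 0.20-era invocation of the examples jar Konstantin mentions looks like the following. This is a sketch: the jar name matches the 0.20.2 release layout, and the row count and HDFS paths are illustrative; it needs a running cluster, so adjust to your release and environment.

```shell
# teragen's first argument is the number of 100-byte rows:
# 100,000,000 rows = ~10 GB of input.
hadoop jar hadoop-0.20.2-examples.jar teragen 100000000 /terasort-in

# Sort the generated data.
hadoop jar hadoop-0.20.2-examples.jar terasort /terasort-in /terasort-out

# Verify the output is globally sorted; the report lands in the last path.
hadoop jar hadoop-0.20.2-examples.jar teravalidate /terasort-out /terasort-report
```

Comparing wall-clock time of the terasort step before and after an upgrade gives a crude but serviceable regression check.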

Re: No locks available

2011-01-11 Thread Adarsh Sharma
Allen Wittenauer wrote: On Jan 11, 2011, at 2:39 AM, Adarsh Sharma wrote: Dear all, Yesterday I was working on a cluster of 6 Hadoop nodes (load data, perform some jobs). But today when I started my cluster I came across a problem on one of my datanodes. Are you running this on NFS?

Re: When applying a patch, which attachment should I use?

2011-01-11 Thread edward choi
I am not familiar with this whole svn and patch business, so please bear with me. I was going to apply hdfs-630-0.20-append.patch only because I wanted to install HBase and the installation guide told me to.
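For anyone equally new to the patch workflow: applying a JIRA patch is just running `patch` from the top of the source tree, e.g. `patch -p0 < hdfs-630-0.20-append.patch` for a 0.20.2 checkout. The self-contained demo below shows what `patch` actually does with a unified diff, using a throwaway file instead of the Hadoop tree (all file names here are made up for illustration):

```shell
cd "$(mktemp -d)"
printf 'old line\n' > demo.txt           # stands in for a source-tree file
printf 'new line\n' > demo.txt.fixed     # what the patched file should contain
diff -u demo.txt demo.txt.fixed > demo.patch || true  # diff exits 1 when files differ
patch -p0 demo.txt < demo.patch          # apply the hunk to demo.txt in place
cat demo.txt                             # prints: new line
```

The `-p0` flag tells `patch` to use the paths in the diff headers verbatim, which is what Hadoop's JIRA patches of that era expected.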

Re: No locks available

2011-01-11 Thread Allen Wittenauer
On Jan 11, 2011, at 2:39 AM, Adarsh Sharma wrote: > Dear all, > > Yesterday I was working on a cluster of 6 Hadoop nodes (load data, perform > some jobs). But today when I started my cluster I came across a problem on one > of my datanodes. Are you running this on NFS? > > 2011-01-1

Re: libjars options

2011-01-11 Thread C.V.Krishnakumar Iyer
Hi, Thanks a lot, Alex! Using GenericOptionsParser solved the issue. Previously I had used Tool and had assumed that it would take care of this. Regards, Krishna. On Jan 11, 2011, at 12:48 PM, Alex Kozlov wrote: > There is also a blog that I recently wrote, if it helps > http://www.cloudera.com/

Re: libjars options

2011-01-11 Thread C.V.Krishnakumar Iyer
Hi, Thanks a lot! I shall try this once and let you know! Regards, Krishna. On Jan 11, 2011, at 12:48 PM, Alex Kozlov wrote: > There is also a blog that I recently wrote, if it helps > http://www.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job > > On Tue,

Re: libjars options

2011-01-11 Thread Alex Kozlov
There is also a blog that I recently wrote, if it helps http://www.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job On Tue, Jan 11, 2011 at 12:33 PM, Alex Kozlov wrote: > Have you implemented GenericOptionsParser? Do you see your jar in the * > mapred.cache.

Re: libjars options

2011-01-11 Thread Alex Kozlov
Have you implemented GenericOptionsParser? Do you see your jar in the *mapred.cache.files* or *tmpjars* parameter in your job.xml file (you can view it via the JobTracker Web UI)? -- Alex Kozlov Solutions Architect Cloudera, Inc twitter: alexvk2009

Re: TeraSort question.

2011-01-11 Thread Raj V
Can't attach the PDF file that shows the different maps; the file is too big. From: Niels Basjes To: common-user@hadoop.apache.org; Raj V Cc: Sent: Tuesday, January 11, 2011 11:07:08 AM Subject: Re: TeraSort question. Raj, Have a look at the graph shown here: http://cs.smith.edu/dftwiki/index.php/H

Re: libjars options

2011-01-11 Thread C.V.Krishnakumar Iyer
Hi, I have tried that as well, using -files, but it still gives the exact same error. Any other thing that I could try? Thanks, Krishna. On Jan 11, 2011, at 10:23 AM, Ted Yu wrote: > Refer to Alex Kozlov's answer on 12/11/10 > > On Tue, Jan 11, 2011 at 10:10 AM, C.V.Krishnakumar Iyer > wrote

Re: TeraSort question.

2011-01-11 Thread Niels Basjes
Raj, Have a look at the graph shown here: http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_1.1_--_Generating_Task_Timelines It should make clear that the number of tasks varies greatly over the lifetime of a job. Depending on the nodes available this may leave nodes idle. Niels 2011/1/11 Ra

Re: libjars options

2011-01-11 Thread Ted Yu
Refer to Alex Kozlov's answer on 12/11/10 On Tue, Jan 11, 2011 at 10:10 AM, C.V.Krishnakumar Iyer wrote: > Hi, > > Could anyone please guide me as to how to use the -libjars option in HDFS? > > I have added the necessary jar file (the hbase jar - to be precise) to the > classpath of the node whe

libjars options

2011-01-11 Thread C.V.Krishnakumar Iyer
Hi, Could anyone please guide me as to how to use the -libjars option in HDFS? I have added the necessary jar file (the hbase jar, to be precise) to the classpath of the node where I am starting the job. The following is the format that I am invoking: bin/hadoop jar -libjars bin/hadoo
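For what it's worth, the usual pitfall with -libjars is argument order: the generic options must come after the main class and before the job's own arguments, and they are only honored when the driver parses its arguments through ToolRunner/GenericOptionsParser, as Alex notes elsewhere in the thread. A hedged sketch with made-up jar, class, and path names:

```shell
# Order matters: your job jar, the main class, generic options,
# then the job's own arguments. -libjars takes a comma-separated
# list of jars to ship to every task's classpath.
bin/hadoop jar myjob.jar com.example.MyDriver \
    -libjars /opt/hbase/hbase-0.20.6.jar \
    /user/krishna/input /user/krishna/output
```

If the driver bypasses GenericOptionsParser, -libjars is silently ignored, which matches the symptom reported in this thread.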

Re: TeraSort question.

2011-01-11 Thread Raj V
Ted, Thanks. I have all the graphs I need, including the map-reduce timeline and system activity for all the nodes while the sort was running. I will publish them once I have them in some presentable format. For legal reasons, I really don't want to send the complete job history files. My questio

Re: When applying a patch, which attachment should I use?

2011-01-11 Thread Ted Dunning
You may also be interested in the append branch: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ On Tue, Jan 11, 2011 at 3:12 AM, edward choi wrote: > Thanks for the info. > I am currently using Hadoop 0.20.2, so I guess I only need to apply > hdfs-630-0.20-append.patch >
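Checking out that branch is a single svn command. Note the URL Ted gives is the ViewVC browser link; the corresponding repository URL below follows the standard Apache svn layout (repos/asf/...), and the local directory name is arbitrary:

```shell
# Check out the 0.20-append branch into a local directory.
svn checkout \
  http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append/ \
  hadoop-0.20-append
```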

Re: TeraSort question.

2011-01-11 Thread Ted Dunning
Raj, Do you have the job history files? That would be very useful. I would be happy to create some swimlane and related graphs for you if you can send me the history files. On Mon, Jan 10, 2011 at 9:06 PM, Raj V wrote: > All, > > I have been running terasort on a 480 node hadoop cluster. I ha

Re: TeraSort question.

2011-01-11 Thread Raj V
I used 9500 maps. The number of maps defaults to 2 for teragen. For terasort, it would depend on the number of input files, the dfs.block.size, and the number of nodes. Raj From: Phil Whelan To: common-user@hadoop.apache.org; Raj V Cc: Sent: Monday, January 10, 2011 10:39:29 PM Subject: Re: T
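The dependence on dfs.block.size that Raj mentions comes from the default FileInputFormat: one map task per input split, and a split is normally one HDFS block. A back-of-the-envelope estimate, with illustrative numbers (the 64 MB block size was the 0.20-era default; check hdfs-site.xml for yours):

```shell
# Estimate terasort map count as ceil(input bytes / block size).
INPUT_BYTES=$((1000 * 1000 * 1000 * 1000))   # 1 TB of teragen output
BLOCK_SIZE=$((64 * 1024 * 1024))             # 64 MB, the 0.20-era default
MAPS=$(( (INPUT_BYTES + BLOCK_SIZE - 1) / BLOCK_SIZE ))  # ceiling division
echo "$MAPS"   # → 14902
```

Larger blocks, fewer input files, or a non-default min split size all push the count down, which is why observed numbers (like Raj's 9500) vary between clusters.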

Re: When applying a patch, which attachment should I use?

2011-01-11 Thread edward choi
Thanks for the info. I am currently using Hadoop 0.20.2, so I guess I only need to apply hdfs-630-0.20-append.patch. I wasn't familiar with the term "trunk". I guess it means "the latest development". Thanks again.

No locks available

2011-01-11 Thread Adarsh Sharma
Dear all, Yesterday I was working on a cluster of 6 Hadoop nodes (load data, perform some jobs). But today when I started my cluster I came across a problem on one of my datanodes. The datanode fails to start due to the following error: 2011-01-11 12:54:10,367 INFO org.apache.hadoop.hdfs.server
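Allen's NFS question earlier in this digest points at the usual cause of "No locks available": the datanode takes a POSIX lock on an in_use.lock file under its data directory, and an NFS mount without a working lock daemon cannot grant it. A quick diagnostic sketch (the data directory path is illustrative; use the dfs.data.dir value from your hdfs-site.xml):

```shell
DATA_DIR=/hadoop/dfs/data              # illustrative; substitute your dfs.data.dir
df -T "$DATA_DIR"                      # "nfs" in the Type column is the red flag
ls -l "$DATA_DIR"/in_use.lock 2>/dev/null  # a leftover lock from a crashed datanode
```

Moving dfs.data.dir onto a local filesystem (or fixing the NFS lock daemon) is the typical resolution.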