Hadoop is not working after adding hadoop-core-0.20-append-r1056497.jar

2011-06-05 Thread praveenesh kumar
Hello guys! I am currently working on HBase 0.90.3 and Hadoop 0.20.2. Since this Hadoop version does not support sync on HDFS, I copied the *hadoop-core-append* jar file from the *hbase/lib* folder into the *hadoop* folder and replaced *hadoop-0.20.2-core.jar* with it, which was suggested in the fol

Re: Verbose screen logging on hadoop-0.20.203.0

2011-06-05 Thread Shi Yu
I still didn't get it. To make sure I am not using any old version, I downloaded the two versions, 0.20.2 and 0.20.203.0, again and did a fresh install separately on two independent clusters. I tried with a very simple toy program. I didn't change anything in the API, so it probably calls the old API

Link error on http://hadoop.apache.org/common/

2011-06-05 Thread Martin Atukunda
Hi, The "Learn about Hadoop by reading the documentation." link on http://hadoop.apache.org/common/ points to a nonexistent document, http://hadoop.apache.org/common/docs/stable/; maybe it should be pointing to http://hadoop.apache.org/common/docs/cu

Re: Verbose screen logging on hadoop-0.20.203.0

2011-06-05 Thread Edward Capriolo
On Sun, Jun 5, 2011 at 1:04 PM, Shi Yu wrote: > We just upgraded from 0.20.2 to hadoop-0.20.203.0 > > Running the same code produces a massive amount of debug > information on the screen output. Normally this type of > information is written to the logs/userlogs directory. However, > nothing is writte

Verbose screen logging on hadoop-0.20.203.0

2011-06-05 Thread Shi Yu
We just upgraded from 0.20.2 to hadoop-0.20.203.0. Running the same code produces a massive amount of debug information on the screen output. Normally this type of information is written to the logs/userlogs directory. However, nothing is written there now and it seems everything goes to the screen
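One plausible cause (an assumption on my part, not confirmed in this thread) is that the fresh 0.20.203.0 install shipped with, or picked up, a conf/log4j.properties whose root logger routes everything to the console appender instead of the task log appender. A sketch of the stock settings worth comparing against your conf directory:

```properties
# conf/log4j.properties — sketch of the defaults to check; if
# hadoop.root.logger were set to DEBUG,console (e.g. via an env
# override of HADOOP_ROOT_LOGGER), all output would hit the screen.
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}

# Per-task logs normally flow through the TaskLogAppender, which is
# what populates the logs/userlogs directory on each node.
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
```

Also worth checking is whether a startup script or HADOOP_OPTS in the new install exports HADOOP_ROOT_LOGGER=DEBUG,console, which overrides the file.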

How to split a specified number of rows per Map

2011-06-05 Thread edward choi
Hi, I am using HBase as a source for my MapReduce jobs. I recently found out that TableInputFormat automatically splits the input table so that each region of the table will be assigned to a single Map job. But what I want to do is to split the input table so that a user-specified number of rows will
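Doing this typically means subclassing TableInputFormat and overriding getSplits() to carve each region into smaller row ranges. The region-independent part — chopping a total row count into fixed-size [start, end) ranges — can be sketched standalone; the class and method names below are hypothetical helpers, not HBase API:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical helper: given a total row count and a desired number of
 * rows per map task, compute the half-open [start, end) row ranges that
 * a custom getSplits() override could turn into TableSplit objects.
 */
public class RowSplitCalculator {

    public static List<long[]> splits(long totalRows, long rowsPerSplit) {
        if (rowsPerSplit <= 0) {
            throw new IllegalArgumentException("rowsPerSplit must be > 0");
        }
        List<long[]> out = new ArrayList<>();
        for (long start = 0; start < totalRows; start += rowsPerSplit) {
            // The last split may be shorter than rowsPerSplit.
            long end = Math.min(start + rowsPerSplit, totalRows);
            out.add(new long[]{start, end});
        }
        return out;
    }
}
```

Note that HBase rows are keyed by byte[] row keys rather than numeric offsets, so in a real override you would translate these range boundaries into start/stop row keys (e.g. by sampling the region), which is the harder part of the problem.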

Re: Identifying why a task is taking long on a given hadoop node

2011-06-05 Thread Steve Loughran
On 03/06/2011 12:24, Mayuresh wrote: Hi, I am really having a hard time debugging this. I have a hadoop cluster and one of the maps is taking a long time. I checked the "datanode" logs and can see no activity for around 10 minutes! The usual cause here is imminent disk failure, as reads start to tak

Re: Backing up namenode

2011-06-05 Thread sulabh choudhury
Hey Mark, If you add more than one directory (comma separated) to the variable "dfs.name.dir", the data will automatically be copied to all of those locations. On Sat, Jun 4, 2011 at 10:14 AM, Mark wrote: > How would I go about backing up our namenode data? I set up the secondarynamenode > on a separate physic
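The comma-separated form looks like this in hdfs-site.xml (the paths below are illustrative, not from the thread):

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- The NameNode writes its image and edit log to every directory
       listed here; an NFS mount is a common choice for the second
       copy so the metadata survives loss of the NameNode's local disk. -->
  <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
</property>
```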