Fwd: version 1.1.2 document error:

2013-07-24 Thread
Hello, on the page http://hadoop.apache.org/docs/r1.1.2/deployment_layout.html, "LZO - LZO codec from github.com/omally/hadoop-gpl-compression" should be https://github.com/omalley/hadoop-gpl-compression, I think. there is another

Re: the different size of file through hadoop streaming

2013-01-31 Thread
of the line (excluding the tab character) is the value. However, this can be customized. So a tab (0x09) was appended to the end of each line. Andy 2013/2/1 周梦想 > hello, > I process a file using hadoop streaming, but I found streaming adds > byte 0x09 before 0x0a. So th
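As a rough sketch of the customization Andy mentions (the property name is from the Hadoop 1.x mapred API; the jar path and input/output names are illustrative), the separator that TextOutputFormat appends between key and value can be overridden, e.g. set to an empty string so no 0x09 is written:

    # map-only identity job; with an empty separator the output bytes
    # should match the input, with no trailing tab before each newline
    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
      -D mapred.textoutputformat.separator="" \
      -input README.txt \
      -output out \
      -mapper /bin/cat \
      -numReduceTasks 0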

the different size of file through hadoop streaming

2013-01-31 Thread
hello, I process a file using hadoop streaming, but I found streaming adds byte 0x09 before 0x0a, so the file is changed after the streaming process. Can someone tell me why this byte is added to the output? [zhouhh@Hadoop48 ~]$ ls -l README.txt -rw-r--r-- 1 zhouhh zhouhh 1399 Feb 1 10:53 README.txt [z
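A quick way to see the extra byte is to dump both files with od; this hypothetical session follows the thread's example (the part-00000 output path is assumed):

    # original file: lines end in \n
    od -c README.txt | tail -2
    # streamed copy: lines end in \t \n, hence the larger size
    hadoop fs -cat out/part-00000 | od -c | tail -2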

Re: Can I switch the IP/host of NN without losing the filesystem?

2013-01-06 Thread
Hi Jianhui, if you only have Hadoop and not HBase, maybe there is no problem. Last time we changed IP addresses but not hostnames. Our Hadoop version is 1.0.2, HBase is 0.92. At the beginning it was OK, but after about 2 hours the secondary namenode exited and couldn't start, reporting NullPointEx
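A hedged sanity-check sketch for this situation (the old IP and hostname below are illustrative): after changing addresses, confirm nothing still pins or resolves the old ones before the daemons and HBase reconnect:

    # make sure no config file still carries the old address
    grep -rn "192.168.1.48" $HADOOP_HOME/conf /etc/hosts
    # on every node, confirm the NN hostname resolves to the new IP
    getent hosts namenode-host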

Re: how to start hadoop 1.0.4 backup node?

2012-12-28 Thread
Yes, it no longer appears in 1.1.x, but the 1.0.4 document is still there. 2012/12/29 Harsh J > Hi, > > I'd already addressed this via > https://issues.apache.org/jira/browse/HADOOP-7297 and it isn't present > anymore in 1.1.x+ docs. > > On Sat, Dec 29, 2012 at 11:42 AM, 周梦

Re: how to start hadoop 1.0.4 backup node?

2012-12-28 Thread
ocuments to the right set of docs. > > Sent from a mobile device > > On Dec 28, 2012, at 7:13 PM, 周梦想 wrote: > > http://hadoop.apache.org/docs/r1.0.4/hdfs_user_guide.html#Backup+Node > > The document says: > The Backup node is configured in the same manner as the Chec

how to start hadoop 1.0.4 backup node?

2012-12-28 Thread
http://hadoop.apache.org/docs/r1.0.4/hdfs_user_guide.html#Backup+Node The document says: The Backup node is configured in the same manner as the Checkpoint node. It is started with bin/hdfs namenode -checkpoint. But in hadoop 1.0.4 there is no hdfs file: [zhouhh@Hadoop48 hadoop-1.0.4]$ ls bin hadoop

Re: why not hadoop backup name node data to local disk daily or hourly?

2012-12-24 Thread
u find anything interesting in the NN, SNN, > DN logs? > > And my grandma says, I look like Abhishek > Bachchan<http://en.wikipedia.org/wiki/Abhishek_Bacchan>;) > > Best Regards, > Tariq > +91-9741563634 > https://mtariq.jux.com/ > > > On Mon, Dec 24, 2012

Re: How to troubleshoot OutOfMemoryError

2012-12-24 Thread
Short for OutOfMemory :) 2012/12/24 Junior Mint > what is oom? haha > > > On Mon, Dec 24, 2012 at 11:30 AM, 周梦想 wrote: > >> I encountered the OOM problem because I didn't set the ulimit open-files >> limit. It had nothing to do with memory; memory is suffici

Re: why not hadoop backup name node data to local disk daily or hourly?

2012-12-24 Thread
you retired nodes one by > one and changed IPs and brought them back in rotation? Also did you change > the IP of your NN as well? > > > > On Mon, Dec 24, 2012 at 4:10 PM, 周梦想 wrote: > >> Actually the problem began at the SecondaryNameNode. We changed all IPs >> of the Hadoop system > > > > > -- > Nitin Pawar >

Re: why not hadoop backup name node data to local disk daily or hourly?

2012-12-24 Thread
ly out of luck and > lose the entire NN. In such a case you can take the copy of the fsimage from > the SNN and retrieve your metadata back. > > HTH > > Best Regards, > Tariq > +91-9741563634 > https://mtariq.jux.com/ > > > On Thu, Dec 20, 2012 at 3:18 PM, 周梦想 wrote: &
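A minimal sketch of that recovery path in Hadoop 1.x (the directories are illustrative; check dfs.name.dir and fs.checkpoint.dir in hdfs-site.xml). The -importCheckpoint option loads the most recent checkpoint from fs.checkpoint.dir into an empty dfs.name.dir:

    # preserve whatever is left of the damaged metadata first
    cp -a /data/hadoop/name /data/hadoop/name.broken
    # start the NN from the SNN's checkpoint
    bin/hadoop namenode -importCheckpoint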

Re: why not hadoop backup name node data to local disk daily or hourly?

2012-12-24 Thread
Thanks to Harsh and Mohammad. Because of the data crash I got ill, so I am replying late... 2012/12/20 Harsh J > Hi, > > On Thu, Dec 20, 2012 at 3:18 PM, 周梦想 wrote: > > For some reason my namenode data got corrupted, and the corrupted data > > also overwrote the secondary namenode data, a

Re: How to troubleshoot OutOfMemoryError

2012-12-23 Thread
I encountered the OOM problem because I didn't set the ulimit open-files limit. It had nothing to do with memory; memory was sufficient. Best Regards, Andy 2012/12/22 Manoj Babu > David, > > I faced the same issue due to too much logging filling the task > tracker log folder. > > Cheers! > Man
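For reference, a sketch of the fix Andy describes (the limits below are illustrative; apply them to the account running the Hadoop daemons and restart the daemons afterwards):

    # check the current open-files limit for this user
    ulimit -n
    # raise it persistently in /etc/security/limits.conf, e.g.:
    #   zhouhh  soft  nofile  32768
    #   zhouhh  hard  nofile  65536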

why not hadoop backup name node data to local disk daily or hourly?

2012-12-20 Thread
For some reason my namenode data got corrupted, and the corrupted data also overwrote the secondary namenode data and the NFS backup. I want to recover the namenode data from a day ago, or even a week ago, but I can't. Do I have to back up the namenode data manually, or write a bash script to back it up? why hadoop d
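One way to script the manual backup the mail asks about, as a hedged sketch (the metadata path must match dfs.name.dir; keeping the newest 14 copies is an arbitrary choice, and copying is safest right after a checkpoint, or from the SNN's fs.checkpoint.dir):

    #!/bin/bash
    SRC=/data/hadoop/name              # dfs.name.dir (assumed)
    DST=/backup/nn/$(date +%Y%m%d%H)   # one dated directory per run
    mkdir -p "$DST"
    cp -a "$SRC"/current "$DST"/       # copy fsimage + edits
    # prune everything but the newest 14 backups
    ls -dt /backup/nn/* | tail -n +15 | xargs -r rm -rf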

Re: name node can't startup

2012-12-17 Thread
paste your config > files. > > Best Regards, > Tariq > +91-9741563634 > > > > On Mon, Dec 17, 2012 at 3:21 PM, 周梦想 wrote: > >> Andy > > >

name node can't startup

2012-12-17 Thread
hello, I encountered a problem with hadoop 1.0.2. At the beginning, the secondary namenode exited and couldn't start; it reports an error like the one below: 2012-12-17 17:09:05,646 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException at org.apache.hadoop.hdfs.server.namenode.FSDi