RE: Reader/Writer problem in HDFS

2011-07-28 Thread Laxman
One approach can be to use a .tmp extension while writing. Once the write is completed, rename the file back to its original name. Also, the reducer has to filter out .tmp files. This will ensure the reducer does not pick up partial files. We have a similar scenario where the above-mentioned approach resolved the issue.
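As a rough sketch of this pattern with the HDFS FileSystem API (the paths and class names below are just examples, not from the original mail):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    public class TmpWriteExample {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Write under a temporary name so readers never see a partial file.
            Path tmp = new Path("/data/input/part-0001.tmp");
            Path finalName = new Path("/data/input/part-0001");
            FSDataOutputStream out = fs.create(tmp);
            try {
                out.writeBytes("record data...\n");
            } finally {
                out.close();
            }
            // Only after the write is fully flushed, expose the file under its real name.
            fs.rename(tmp, finalName);
        }

        // Reader side: skip any in-progress .tmp files when listing input.
        public static final PathFilter SKIP_TMP = new PathFilter() {
            @Override
            public boolean accept(Path path) {
                return !path.getName().endsWith(".tmp");
            }
        };
    }

If the reader is a MapReduce job, the same filtering can be plugged in via FileInputFormat.setInputPathFilter so the .tmp files never become input splits.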

RE: Error in 9000 and 9001 port in hadoop-0.20.2

2011-07-28 Thread Laxman
Start the namenode (with fs.default.name set to hdfs://192.168.1.101:9000) and check your netstat report (netstat -nlp) to see which IP and port it is actually binding to. Ideally, port 9000 should be bound to 192.168.1.101. If so, configure the same IP on the slaves as well. Otherwise, we may need to revisit your configs.
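For reference, the relevant setting and the check would look roughly like this (IP and port taken from your mail; adjust to your setup; port 9001 would usually be the corresponding mapred.job.tracker setting in mapred-site.xml):

    <!-- core-site.xml on the namenode and on every slave -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://192.168.1.101:9000</value>
    </property>

    # After starting the namenode, verify the binding:
    netstat -nlp | grep 9000
    # You want to see something like: tcp ... 192.168.1.101:9000 ... LISTEN ... <pid>/java
    # If it shows 127.0.0.1:9000 or another IP instead, the slaves will not be able to connect.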