Hi,
I have a simple map-reduce program [map only :) ] that reads the input and
emits the same to n outputs, on a single-node cluster with max map tasks set
to 10 on a 16-core machine.
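For context, a minimal sketch of what such a map-only job might look like,
assuming a Hadoop release with the new-API MultipleOutputs (the class names,
output names, and n = 10 are illustrative assumptions, not the actual program):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MultiEmitJob {

  public static class EmitMapper
      extends Mapper<LongWritable, Text, NullWritable, Text> {

    private static final int NUM_OUTPUTS = 10; // "n"; value assumed here
    private MultipleOutputs<NullWritable, Text> outputs;

    @Override
    protected void setup(Context context) {
      outputs = new MultipleOutputs<NullWritable, Text>(context);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Emit each input record unchanged to every named output.
      for (int i = 0; i < NUM_OUTPUTS; i++) {
        outputs.write("out" + i, NullWritable.get(), value);
      }
    }

    @Override
    protected void cleanup(Context context)
        throws IOException, InterruptedException {
      outputs.close(); // flush and close all underlying writers
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "multi-emit");
    job.setJarByClass(MultiEmitJob.class);
    job.setMapperClass(EmitMapper.class);
    job.setNumReduceTasks(0); // map-only job
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // Register each of the n named outputs.
    for (int i = 0; i < 10; i++) {
      MultipleOutputs.addNamedOutput(job, "out" + i,
          TextOutputFormat.class, NullWritable.class, Text.class);
    }
    job.waitForCompletion(true);
  }
}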
After a while the tasks begin to fail with the following exception log.
2011-01-01 03:17:52,149 INFO
It looks like the folder was deleted before the file was completed. In HDFS,
files can be deleted at any time; the application needs to take care of file
completeness itself, depending on how it uses the files.
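As one illustration of that point, a consumer could verify a file before
reading it. This is a hedged sketch; the helper name and the expected-length
convention are my assumptions, not anything from the original code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: check that a file still exists and has reached the
// length the writer promised, before a downstream consumer opens it.
public class FileCheck {
  public static boolean isComplete(Configuration conf, Path path,
      long expectedLength) throws IOException {
    FileSystem fs = path.getFileSystem(conf);
    if (!fs.exists(path)) {
      return false; // file was deleted, or was never created
    }
    FileStatus status = fs.getFileStatus(path);
    return status.getLen() >= expectedLength;
  }
}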
Do you have any DFSClient-side logs from the MapReduce tasks showing exactly
when the delete command was issued?
----- Original Message -----
Hi,
Thanks for the reply.
There's no delete command issued from the client code. FYR, I have attached
the program that's used to reproduce this bug. The input is a simple CSV
file with 2 million entries.
Thanks
Sudhan S
On Fri, Nov 4, 2011 at 4:42 PM, Uma Maheswara Rao G 72686 wrote:
Hello all,
I am getting wildly varying reports of under-replicated blocks from dfsadmin
and fsck, and I am wondering what's causing this. hadoop dfsadmin -metasave reports
~1,000 actual blocks awaiting replication and ~404,000 MISSING blocks
awaiting replication. How do I fix this? Why are