Hello,

I'd like to ask how Hadoop splits text input when the file is smaller than the HDFS block size.

I'm testing an application that produces large outputs from small inputs.

When I use the NInputSplits input format and set the number of splits in mapred-conf.xml, some results are lost when the output is written.

When the application runs with the default TextInputFormat, everything works fine.
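
For reference, my driver is set up roughly like this in the two cases (a simplified sketch using the old mapred API; NLineInputFormat and the property names below stand in for what I actually configure, and the mapper/reducer default to the identity classes here):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.lib.NLineInputFormat;

    public class SplitTestDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SplitTestDriver.class);
            conf.setJobName("split-test");

            // Variant A (results get lost): an N-line style input format plus
            // an explicit request for more map tasks. setNumMapTasks() is only
            // a hint to the framework, same as setting it in the XML config.
            conf.setInputFormat(NLineInputFormat.class);
            conf.setInt("mapred.line.input.format.linespermap", 10);
            conf.setNumMapTasks(8);

            // Variant B (works fine): the default input format, which gives
            // one split per file when the file is smaller than an HDFS block.
            // conf.setInputFormat(TextInputFormat.class);

            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(Text.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }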

Do you have any idea where the problem might be?

Thanks in advance for any answer.
