Hi Praveen,

This is explained at http://wiki.apache.org/hadoop/HadoopMapReduce [Map section].
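
In short: for text input, a mapper whose split does not start at byte 0 of the
file throws away everything up to the first newline (that partial record belongs
to the previous split), and every mapper reads past the end of its own split to
finish the last record it started. Below is a rough sketch of that rule in plain
Java. It is not Hadoop's actual LineRecordReader, and the class and method names
(SplitReadSketch, readSplit) are made up for illustration:

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SplitReadSketch {

    // Emit the records "owned" by the byte range [start, end) of the file.
    static void readSplit(byte[] file, long start, long end) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(file));
        in.skip(start);
        long pos = start;
        if (start != 0) {
            // Not the first split: the bytes up to the first '\n' are the tail
            // of a record owned by the previous split, so throw them away.
            pos += skipPastNewline(in);
        }
        // A record is ours if it *starts* before `end`; we read past `end` to
        // finish it, which is exactly why the next split skips its first line.
        while (pos < end) {
            StringBuilder line = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\n') {
                line.append((char) b); // assumes single-byte chars; fine for a sketch
                pos++;
            }
            pos++; // count the newline we consumed
            System.out.println("record: " + line);
            if (b == -1) {
                break; // end of file
            }
        }
    }

    private static long skipPastNewline(DataInputStream in) throws IOException {
        long skipped = 0;
        int b;
        while ((b = in.read()) != -1) {
            skipped++;
            if (b == '\n') {
                break;
            }
        }
        return skipped;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "rec1\nrec2\nrec3\nrec4\n".getBytes(StandardCharsets.UTF_8);
        // Pretend the block/split boundary falls at byte 7, in the middle of "rec2".
        readSplit(data, 0, 7);   // prints rec1 and rec2 (reads past byte 7 to finish rec2)
        readSplit(data, 7, 20);  // skips the tail of rec2, prints rec3 and rec4
    }
}

Run the two readSplit() calls in main() and each record comes out exactly once,
even though the boundary at byte 7 cuts "rec2" in half.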

On Thu, Jan 24, 2013 at 10:20 PM, Praveen Sripati
<praveensrip...@gmail.com> wrote:
> Hi,
>
> HDFS splits a file into fixed-size blocks without regard to record
> boundaries, so a record can straddle two blocks. How does the mapper
> processing the second block (b2) determine that its first record is
> incomplete and that it should start from the second record in the block?
>
> Thanks,
> Praveen



-- 
Harsh J
