mapper, change your block size for the files for this
kind of mapper. In HDFS, the block size is set at the file level. You can set it
yourself.
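For example, the per-file block size can be overridden when copying a file into HDFS (a config fragment; the 256 MB value and the paths are placeholders, and the property is dfs.block.size in Hadoop 1.x, dfs.blocksize in 2.x):

```shell
# Copy one file into HDFS with a 256 MB block size for just this file;
# the cluster-wide default block size is unchanged.
hadoop fs -Ddfs.block.size=268435456 -put input.txt /user/hadoop/input.txt
```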
Yong
From: zhangyunming1...@gmail.com
Date: Sun, 29 Sep 2013 21:12:40 -0500
Subject: Re: All datanodes are bad IOException when trying to implement
Wouldn't you rather just change your split size so that you can have more
mappers work on your input? What else are you doing in the mappers?
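One way to do that is to cap the split size when submitting the job (a sketch; the property name varies by version: mapred.max.split.size in the old API, mapreduce.input.fileinputformat.split.maxsize in Hadoop 2, and the jar/class names below are placeholders):

```shell
# Cap each input split at 32 MB so a 256 MB input yields ~8 mappers
# instead of 2 with a 128 MB block size (values are illustrative).
hadoop jar myjob.jar MyJob \
  -Dmapreduce.input.fileinputformat.split.maxsize=33554432 \
  input_dir output_dir
```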
Sent from my iPad
On Sep 30, 2013, at 2:22 AM, yunming zhang zhangyunming1...@gmail.com wrote:
Hi,
I was playing with Hadoop code trying to have a
The number of mappers is usually the same as the number of files you feed to it.
To reduce that number you can use CombineFileInputFormat.
I recently wrote an article about it. You can take a look if this fits your
needs.
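As a rough sketch of what that looks like as a job configuration fragment (assuming Hadoop 2.x property names; CombineTextInputFormat is the concrete text subclass of CombineFileInputFormat, and the 256 MB cap is a placeholder):

```xml
<!-- Pack many small files into each split, up to 256 MB per split -->
<property>
  <name>mapreduce.job.inputformat.class</name>
  <value>org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat</value>
</property>
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <value>268435456</value>
</property>
```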
Thanks Sonai, Felix, I have looked into CombineFileInputFormat before.
The problem I am trying to solve here is that I want to reduce the number
of mappers running concurrently on a single node. Normally, on a machine
with 8 GB of RAM and 8 cores, I need to run 8 JVMs (one per mapper) to exploit
all 8 cores.
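A config sketch of capping concurrent mappers per node (MRv1 property names, which match this Hadoop era; this goes in mapred-site.xml on each TaskTracker, and the value 4 is a placeholder):

```xml
<!-- Run at most 4 map task JVMs at once on this node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
```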