Are you looking to decrease it to get more parallel map tasks out of
the small files? Are you currently CPU bound on processing these small
files?
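
If you do end up merging instead, one common approach is to pack the
small files into a single SequenceFile keyed by file name. A minimal
sketch of that, assuming raw bytes as the value; the class name and
paths are purely illustrative, not from your setup:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.BytesWritable;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Text;

  public class SmallFilePacker {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path inputDir = new Path(args[0]);   // directory full of small files
      Path packed = new Path(args[1]);     // output SequenceFile

      SequenceFile.Writer writer = SequenceFile.createWriter(
          fs, conf, packed, Text.class, BytesWritable.class);
      try {
        for (FileStatus status : fs.listStatus(inputDir)) {
          if (status.isDir()) {
            continue;
          }
          byte[] buf = new byte[(int) status.getLen()];
          FSDataInputStream in = fs.open(status.getPath());
          try {
            in.readFully(buf);             // small files, so one read is fine
          } finally {
            in.close();
          }
          // key = original file name, value = raw file contents
          writer.append(new Text(status.getPath().getName()),
                        new BytesWritable(buf));
        }
      } finally {
        writer.close();
      }
    }
  }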

On Thu, May 9, 2013 at 9:12 PM, YouPeng Yang <yypvsxf19870...@gmail.com> wrote:
> Hi all,
>
>     I am going to set up a new Hadoop environment. Because there are lots
> of small files, I would like to change the default block size to 16MB
> rather than merging the files into larger ones (e.g. using SequenceFiles).
>     Are there any bad influences or issues with doing this?
>
> Regards
>
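
On the block-size question above: besides lowering the cluster-wide
default, the block size can also be set per file when it is created, so
only the small-file data gets the smaller blocks. A rough sketch, with
the 16MB value and the path purely illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SmallBlockWrite {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      long blockSize = 16L * 1024 * 1024;  // 16MB blocks for this file only
      // create(path, overwrite, bufferSize, replication, blockSize)
      FSDataOutputStream out = fs.create(new Path(args[0]), true, 4096,
          fs.getDefaultReplication(), blockSize);
      try {
        out.writeBytes("example payload\n");
      } finally {
        out.close();
      }
    }
  }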



-- 
Harsh J
