You can change it to any size that is a multiple of bytesPerChecksum
(512 bytes by default).
But setting it to a much smaller value puts a heavy load on the cluster,
while setting it to a very high value means the data will not be well
distributed. So 64MB (or 128MB in the latest trunk) is recommended as the
optimal default. It's up to you to decide based on your use case.
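
For example, here is a minimal sketch using Hadoop's Java FileSystem API
(the property is "dfs.blocksize" in recent releases, "dfs.block.size" in
older ones; the file path and replication factor below are just
illustrative, not from the thread):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // 32MB; valid because it is a multiple of the 512-byte
    // bytesPerChecksum.
    long blockSize = 32L * 1024 * 1024;
    // Client-side override of the cluster's default block size.
    conf.setLong("dfs.blocksize", blockSize);

    FileSystem fs = FileSystem.get(conf);
    // The block size can also be set per file via create().
    Path file = new Path("/tmp/example.dat"); // illustrative path
    FSDataOutputStream out = fs.create(file, true,
        conf.getInt("io.file.buffer.size", 4096),
        (short) 3, blockSize); // replication factor of 3 assumed
    try {
      out.writeUTF("hello hdfs");
    } finally {
      out.close();
    }
  }
}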

Regards,
Vinayakumar B
On Sep 10, 2013 9:02 AM, "kun yan" <yankunhad...@gmail.com> wrote:

> Hi all
> Can I change the HDFS data block size to 32MB? I know the default is 64MB
> thanks
>
> --
>
> In the Hadoop world I am just a novice, exploring the entire Hadoop
> ecosystem; I hope one day I can contribute my own code
>
> YanBit
> yankunhad...@gmail.com
>
>
