[yankunhad...@gmail.com]
Sent: Tuesday, September 10, 2013 12:27 PM
To: user@hadoop.apache.org
Subject: Re: modify hdfs block size
Thank you very much.
2013/9/10 Harsh J <ha...@cloudera.com>
You cannot change the blocksize (i.e. merge or split) of an existing
file. You can, however, change it for newer files, and also download and
re-upload older files again with the newer blocksize to change it.

You can change it to any size that is a multiple of 512 bytes, which is the
default bytesPerChecksum. Setting it to smaller values leads to heavy load
on the cluster, and setting it to a very high value will not distribute the
data. So 64 MB (or 128 MB in the latest trunk) is recommended as optimal.
It is up to you to decide.
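The 512-byte-multiple constraint described above can be sanity-checked before uploading. A minimal sketch (the commented upload command assumes a running cluster; the per-file property is dfs.block.size in Hadoop 1.x and dfs.blocksize in later releases):

```shell
# Candidate block size: 32 MB, as asked about in this thread.
BLOCKSIZE=$((32 * 1024 * 1024))   # 33554432 bytes

# A valid blocksize must be a multiple of 512 bytes
# (the default bytesPerChecksum, per the reply above).
if [ $((BLOCKSIZE % 512)) -eq 0 ]; then
  echo "valid: $BLOCKSIZE"
else
  echo "invalid: $BLOCKSIZE"
fi

# With a valid size, a new file can be written with a per-file blocksize,
# without changing the cluster default, e.g.:
#   hadoop fs -D dfs.block.size=$BLOCKSIZE -put localfile /user/foo/file
```

Existing files keep their old blocksize; as noted above, they would have to be downloaded and re-uploaded to pick up the new value.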
On Tue, Sep 10, 2013 at 9:01 AM, kun yan yankunhad...@gmail.com wrote:

Hi all,
Can I change the HDFS data block size to 32 MB? I know the default is 64 MB.
Thanks.
--
In the Hadoop world I am just a novice, exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.
YanBit
yankunhad...@gmail.com
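To make 32 MB the default for newly created files cluster-wide, rather than per upload, a setting along these lines in hdfs-site.xml would apply (a sketch; the property is named dfs.block.size in Hadoop 1.x and dfs.blocksize in 2.x, and only files written after the change are affected):

```xml
<property>
  <name>dfs.block.size</name>
  <!-- 32 MB = 33554432 bytes; must be a multiple of 512 (bytesPerChecksum) -->
  <value>33554432</value>
</property>
```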