Hi Harsh

Thank you for the reply.

Actually, the Hadoop directory is on my NFS server, so every node reads the
same configuration files from the NFS server. I don't think this is a problem.

I like your second solution, but I am not sure whether the namenode will
split those 128MB blocks into smaller ones later on.
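
If I understand your second suggestion correctly, for a 128MB block size the
upload would look roughly like this (134217728 bytes = 128MB; the local path
below is just a placeholder):

  # override the block size for this one upload only
  hadoop dfs -Ddfs.block.size=134217728 -put /local/path/file1 /user/file1/file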

Chen

On Wed, May 4, 2011 at 3:00 PM, Harsh J <ha...@cloudera.com> wrote:

> Your client (put) machine must have the same block size configuration
> during upload as well.
>
> Alternatively, you may do something explicit like `hadoop dfs
> -Ddfs.block.size=size -put file file`
>
> On Thu, May 5, 2011 at 12:59 AM, He Chen <airb...@gmail.com> wrote:
> > Hi all
> >
> > I ran into a problem when changing the block size from 64M to 128M. I am
> > sure I modified the correct configuration file, hdfs-site.xml, because I
> > can change the replication factor correctly. However, the block size
> > change does not take effect.
> >
> > For example:
> >
> > I change the dfs.block.size to 134217728 bytes.
> >
> > I uploaded a file which is 128M and used "fsck" to see how many blocks the
> > file has. It shows:
> > /user/file1/file 134217726 bytes, 2 block(s): OK
> > 0. blk_xxxxxxxxxx len=67108864 repl=2 [192.168.0.3:50010, 192.168.0.32:50010]
> > 1. blk_xxxxxxxxxx len=67108862 repl=2 [192.168.0.9:50010, 192.168.0.8:50010]
> >
> > The Hadoop version is 0.21. Any suggestions would be appreciated!
> >
> > thanks
> >
> > Chen
> >
>
>
>
> --
> Harsh J
>
