This is a known behavior (a feature, even).  When you write from a datanode,
HDFS prefers to place the data on that node because it is local.

To avoid this, run the put on a non-datanode.

Or do the put with a higher replication and drop the replication after the
put.
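The replication trick might look like this (standard Hadoop CLI commands; the
paths are hypothetical):

```shell
# Put the file with a higher replication factor. The first replica is
# still written locally, but the extra replicas land on other datanodes.
hadoop fs -D dfs.replication=3 -put /local/bigfile /user/foo/bigfile

# Then lower the replication back to 1. The namenode deletes the surplus
# replicas, so the surviving blocks end up spread across datanodes
# rather than all on the node where the put ran.
hadoop fs -setrep -w 1 /user/foo/bigfile
```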

Or use distcp if all of the data nodes have access to the same data (perhaps
via nfs).
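A distcp run over a shared mount could be sketched as follows (paths and the
namenode address are hypothetical). Because distcp runs as a MapReduce job,
the writes originate from many nodes, so the local-replica preference spreads
the blocks instead of concentrating them:

```shell
# Each map task copies its share of the data; the blocks it writes are
# placed local to that task's node, distributing them across the cluster.
hadoop distcp file:///nfs/shared/bigfile hdfs://namenode:9000/user/foo/bigfile
```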


On 9/12/07 11:11 PM, "ChaoChun Liang" <[EMAIL PROTECTED]> wrote:

> 
> Thanks for your detailed example and explanation.
> 
> The problem I met is that all split blocks are stored on the same datanode;
> that is, (A1, A2, A3) are stored on the same datanode in your example.
> 
> My test case is putting (via the "hadoop fs -put" command) a file of about
> 1GB into HDFS
... 
> 
> Does something look wrong? Or is it a configuration problem?