What Shumin said is correct: the Hadoop configuration has been overridden
by the client application.

We faced a similar issue, where the default replication factor was set to 2
in the Hadoop configuration, but whenever the client application wrote a
file, it ended up with 3 copies in the cluster. On checking the client
application, we found its default replication factor was 3.
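For illustration, a client-side hdfs-site.xml along these lines would override the cluster defaults for any files that client writes (the property names are the standard `dfs.replication` and `dfs.blocksize` keys; the values shown are just examples):

```xml
<configuration>
  <!-- Client-side override: files written by this client get 3 replicas,
       regardless of the default set on the namenode. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Client-side override: 128 MB blocks for files written by this client. -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
```

Both properties are client-side settings: the client tells the namenode the block size and replication factor at file-creation time, so whatever the client's configuration (or code) specifies wins over the defaults configured on the namenode or datanodes.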


On Sun, Jul 14, 2013 at 4:51 AM, Shumin Guo <gsmst...@gmail.com> wrote:

> I Think the client side configuration will take effect.
>
> Shumin
> On Jul 12, 2013 11:50 AM, "Shalish VJ" <shalis...@yahoo.com> wrote:
>
>> Hi,
>>
>>
>>     Suppose block size set in configuration file at client side is 64MB,
>> block size set in configuration file at name node side is 128MB and block
>> size set in configuration file at datanode side is something else.
>> Please advice, If the client is writing a file to hdfs,which property
>> would be executed.
>>
>> Thanks,
>> Shalish.
>>
>


-- 
Regards,
Varun Kumar.P
