Hello Nitin,

The HDFS replication factor is always associated with individual files. When
you copy or create a file in any directory, it is given the default number of
replicas, but you can specify the replication factor when creating or copying
a file:
   hadoop fs -D dfs.replication=2 -put foo.txt fsput

and in Java:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// replication is applied per file; (short) 1 requests a single replica
fs.setReplication(new Path("hdfs_path:/foldername/filename"), (short) 1);
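For reference, the default that applies when no per-file value is given is the
cluster-wide dfs.replication setting in hdfs-site.xml (a sketch; 3 is the
stock default, adjust for your cluster):

   <property>
     <name>dfs.replication</name>
     <value>3</value>
   </property>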




On Mon, Apr 28, 2014 at 6:11 PM, Nitin Pawar <nitinpawar...@gmail.com> wrote:

> Sudhakar,
>
> will this set for the new files being written to those directories?
>
>
> On Mon, Apr 28, 2014 at 5:52 PM, sudhakara st <sudhakara...@gmail.com> wrote:
>
>> setrep changes the replication factor of a file. The -R option recursively
>> changes the replication factor of all files within a directory, and -w
>> waits for the replication to complete.
>>
>> Example:
>>
>>    -  hadoop fs -setrep -w 3 -R /user/hadoop/dir1
>>
>> hdfs dfs -setrep -R -w 1 /dir/
>>
>>
>> On Mon, Apr 28, 2014 at 2:29 PM, Nitin Pawar <nitinpawar...@gmail.com> wrote:
>>
>>> DFS replication is set at the file level (block level) or at the cluster
>>> level (the cluster-wide default is used if you do not specify a
>>> replication factor while writing the file).
>>> As per my understanding, there is nothing for directories.
>>>
>>>
>>>
>>>
>>> On Mon, Apr 28, 2014 at 2:12 PM, Meng QingPing <mqingp...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I want to set dfs.replication to 1 for dfs /tmp and to 3 for dfs /user.
>>>> How can I configure this? Both /tmp and /user are populated by MapReduce
>>>> jobs, Hive, or Sqoop.
>>>>
>>>> Thanks,
>>>> Jack
>>>>
>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>>
>> --
>>
>> Regards,
>> ...sudhakara
>>
>>
>
>
>
> --
> Nitin Pawar
>



-- 

Regards,
...sudhakara
