Another question:
Can an expiration time be set for /tmp, or can yarn/mapreduce be configured
to remove expired tmp files periodically?
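
For illustration, the kind of periodic cleanup I have in mind would look
roughly like the sketch below (the /tmp path and the 24-hour cutoff are just
placeholders); I'm hoping there is a built-in setting instead of running
something like this from cron:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpCleaner {
    public static void main(String[] args) throws Exception {
        // Placeholder cutoff: remove anything under /tmp older than 24 hours.
        long maxAgeMs = 24L * 60 * 60 * 1000;
        FileSystem fs = FileSystem.get(new Configuration());
        long now = System.currentTimeMillis();
        for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
            if (now - status.getModificationTime() > maxAgeMs) {
                fs.delete(status.getPath(), true); // true = delete recursively
            }
        }
        fs.close();
    }
}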

Thanks,
Jack


2014-04-29 16:56 GMT+08:00 Meng QingPing <mqingp...@gmail.com>:

> Thanks for all replies.
>
> Most of the files in /tmp are generated by Hadoop jobs. Can yarn/mapreduce
> be configured to use a specific replication factor for tmp files?
>
> Thanks,
> Jack
>
>
> 2014-04-29 16:40 GMT+08:00 sudhakara st <sudhakara...@gmail.com>:
>
> Hello Nitin,
>>
>> The HDFS replication factor is always associated with individual files. When
>> you copy or create a file in any directory, it gets the default number of
>> replicas, but you can specify the replication factor when creating or
>> copying files:
>>
>>    hadoop fs -D dfs.replication=2 -put foo.txt fsput
>>
>> and in Java:
>>
>> FileSystem fs = FileSystem.get(new Configuration());
>> fs.setReplication(new Path("/foldername/filename"), (short) 1);
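>>
>> As a rough sketch only (the path and the replication value below are
>> placeholders, not from this thread), the replication factor can also be
>> passed directly to FileSystem.create() when the file is first written:
>>
>> import java.io.IOException;
>>
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.fs.FSDataOutputStream;
>> import org.apache.hadoop.fs.FileSystem;
>> import org.apache.hadoop.fs.Path;
>>
>> public class CreateWithReplication {
>>     public static void main(String[] args) throws IOException {
>>         FileSystem fs = FileSystem.get(new Configuration());
>>         Path file = new Path("/tmp/foo.txt");   // placeholder path
>>         // create(path, overwrite, bufferSize, replication, blockSize)
>>         FSDataOutputStream out = fs.create(file, true, 4096, (short) 1,
>>                 fs.getDefaultBlockSize(file));
>>         out.writeUTF("hello");
>>         out.close();
>>         fs.close();
>>     }
>> }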
>>
>>
>>
>>
>> On Mon, Apr 28, 2014 at 6:11 PM, Nitin Pawar <nitinpawar...@gmail.com> wrote:
>>
>>> Sudhakar,
>>>
>>> will this apply to new files being written to those directories?
>>>
>>>
>>> On Mon, Apr 28, 2014 at 5:52 PM, sudhakara st <sudhakara...@gmail.com> wrote:
>>>
>>>> setrep changes the replication factor of a file. The -R option recursively
>>>> changes the replication factor of all files within a directory.
>>>>
>>>> Examples:
>>>>
>>>>    hadoop fs -setrep -w 3 -R /user/hadoop/dir1
>>>>    hadoop dfs -setrep -R -w 1 /dir/
>>>>
>>>>
>>>> On Mon, Apr 28, 2014 at 2:29 PM, Nitin Pawar 
<nitinpawar...@gmail.com> wrote:
>>>>
>>>>> DFS replication is set at the file level (block level) or at the cluster
>>>>> level (the cluster default is picked if you do not specify a replication
>>>>> factor while writing the file).
>>>>> As per my understanding, there is nothing at the directory level.
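>>>>>
>>>>> As a small sketch (the path is a placeholder), the cluster default can be
>>>>> overridden on the client side by setting dfs.replication in the
>>>>> Configuration; any file then written without an explicit value picks it up:
>>>>>
>>>>> import org.apache.hadoop.conf.Configuration;
>>>>> import org.apache.hadoop.fs.FSDataOutputStream;
>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>> import org.apache.hadoop.fs.Path;
>>>>>
>>>>> public class DefaultReplicationDemo {
>>>>>     public static void main(String[] args) throws Exception {
>>>>>         Configuration conf = new Configuration();
>>>>>         conf.setInt("dfs.replication", 1);  // client-side default for new files
>>>>>         FileSystem fs = FileSystem.get(conf);
>>>>>         // No replication given here, so the dfs.replication set above is used.
>>>>>         FSDataOutputStream out = fs.create(new Path("/tmp/part-00000"));
>>>>>         out.writeUTF("data");
>>>>>         out.close();
>>>>>         fs.close();
>>>>>     }
>>>>> }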
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Apr 28, 2014 at 2:12 PM, Meng QingPing <mqingp...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I want to set dfs.replication to 1 for /tmp and to 3 for /user on DFS.
>>>>>> How do I configure this? The files in both /tmp and /user are generated
>>>>>> by MapReduce jobs, Hive, or Sqoop.
>>>>>>
>>>>>> Thanks,
>>>>>> Jack
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nitin Pawar
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Regards,
>>>> ...sudhakara
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>>
>> --
>>
>> Regards,
>> ...sudhakara
>>
>>
>
>
>
> --
> Thanks,
> Qingping
>



-- 
Thanks,
Qingping
