Re: Can different dfs.replication be set for different dirs?

2014-04-29 Thread Meng QingPing
Another question: can an expiration time be set for /tmp, or can
yarn/mapreduce be configured to remove expired temp files periodically?

Thanks,
Jack
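
As far as I know there is no built-in expiration for plain HDFS
directories, so the usual workaround is a periodic cleanup pass. A minimal
sketch of such a pass, where the /tmp path and the seven-day cutoff are
only placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TmpCleaner {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Delete anything under /tmp not modified in the last 7 days;
        // both the path and the cutoff are placeholders.
        long cutoff = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000;
        for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
            if (status.getModificationTime() < cutoff) {
                fs.delete(status.getPath(), true); // true = recursive delete
            }
        }
    }
}

Scheduled from cron or any other scheduler, this gives the periodic
cleanup asked about above.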


2014-04-29 16:56 GMT+08:00 Meng QingPing :

> Thanks for all replies.
>
> Most of the files in /tmp are generated by Hadoop jobs. Can
> yarn/mapreduce be configured to use a specific replication for temp files?
>
> Thanks,
> Jack
>
>
> 2014-04-29 16:40 GMT+08:00 sudhakara st :
>
> Hello Nitin,
>>
>> The HDFS replication factor is always a per-file attribute. When you
>> copy or create a file in any directory, it gets the default replication,
>> but you can specify the replication when creating or copying files:
>>
>>    hadoop fs -D dfs.replication=2 -put foo.txt fsput
>>
>> and in java
>>
>> FileSystem fs = FileSystem.get(new Configuration());
>> fs.setReplication(new Path("/foldername/filename"), (short) 1);
>>
>> On Mon, Apr 28, 2014 at 6:11 PM, Nitin Pawar wrote:
>>
>>> Sudhakar,
>>>
>>> will this apply to new files being written to those directories?
>>>
>>>
>>> On Mon, Apr 28, 2014 at 5:52 PM, sudhakara st wrote:
>>>
>>>> setrep changes the replication factor of a file; the -R option
>>>> recursively changes the replication factor of files within a directory.
>>>>
>>>> Examples:
>>>>
>>>>    hadoop fs -setrep -w 3 -R /user/hadoop/dir1
>>>>    hadoop dfs -setrep -R -w 1 /dir/
>>>>
>>>>
>>>> On Mon, Apr 28, 2014 at 2:29 PM, Nitin Pawar wrote:
>>>>
>>>>> DFS replication is set at the file level (applied per block); the
>>>>> cluster-level default is used only when you do not specify a
>>>>> replication factor while writing the file.
>>>>> As per my understanding, there is nothing at the directory level.
>>>>>
>>>>> On Mon, Apr 28, 2014 at 2:12 PM, Meng QingPing wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I want to set dfs.replication to 1 for /tmp and to 3 for /user. How
>>>>>> can this be configured? The contents of both /tmp and /user are
>>>>>> generated by mapreduce jobs, hive, or sqoop.
>>>>>>
>>>>>> Thanks,
>>>>>> Jack
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nitin Pawar
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Regards,
>>>> ...sudhakara
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>>
>> --
>>
>> Regards,
>> ...sudhakara
>>
>>
>
>
>
> --
> Thanks,
> Qingping
>



-- 
Thanks,
Qingping


Re: Can different dfs.replication be set for different dirs?

2014-04-29 Thread Meng QingPing
Thanks for all replies.

Most of the files in /tmp are generated by Hadoop jobs. Can yarn/mapreduce
be configured to use a specific replication for temp files?

Thanks,
Jack
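
As far as I know there is no per-job switch specific to temp files, but
replication is taken from the client configuration when a file is created,
so a job can set dfs.replication in its own Configuration and the files it
writes will be created with that value. A sketch, with the class and job
name as placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LowReplicationJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Files written by this job are created with replication 1
        // instead of inheriting the cluster-wide default.
        conf.setInt("dfs.replication", 1);
        Job job = Job.getInstance(conf, "low-replication-job");
        // ... set mapper, reducer, and input/output paths here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}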


2014-04-29 16:40 GMT+08:00 sudhakara st :

> Hello Nitin,
>
> The HDFS replication factor is always a per-file attribute. When you
> copy or create a file in any directory, it gets the default replication,
> but you can specify the replication when creating or copying files:
>
>    hadoop fs -D dfs.replication=2 -put foo.txt fsput
>
> and in java
>
> FileSystem fs = FileSystem.get(new Configuration());
> fs.setReplication(new Path("/foldername/filename"), (short) 1);
>
> On Mon, Apr 28, 2014 at 6:11 PM, Nitin Pawar wrote:
>
>> Sudhakar,
>>
>> will this apply to new files being written to those directories?
>>
>>
>> On Mon, Apr 28, 2014 at 5:52 PM, sudhakara st wrote:
>>
>>> setrep changes the replication factor of a file; the -R option
>>> recursively changes the replication factor of files within a directory.
>>>
>>> Examples:
>>>
>>>    hadoop fs -setrep -w 3 -R /user/hadoop/dir1
>>>    hadoop dfs -setrep -R -w 1 /dir/
>>>
>>>
>>> On Mon, Apr 28, 2014 at 2:29 PM, Nitin Pawar wrote:
>>>
>>>> DFS replication is set at the file level (applied per block); the
>>>> cluster-level default is used only when you do not specify a
>>>> replication factor while writing the file.
>>>> As per my understanding, there is nothing at the directory level.
>>>>
>>>> On Mon, Apr 28, 2014 at 2:12 PM, Meng QingPing wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I want to set dfs.replication to 1 for /tmp and to 3 for /user. How
>>>>> can this be configured? The contents of both /tmp and /user are
>>>>> generated by mapreduce jobs, hive, or sqoop.
>>>>>
>>>>> Thanks,
>>>>> Jack
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Nitin Pawar
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> Regards,
>>> ...sudhakara
>>>
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
>
> --
>
> Regards,
> ...sudhakara
>
>



-- 
Thanks,
Qingping


Can different dfs.replication be set for different dirs?

2014-04-28 Thread Meng QingPing
Hi,

I want to set dfs.replication to 1 for /tmp and to 3 for /user. How can
this be configured? The contents of both /tmp and /user are generated by
mapreduce jobs, hive, or sqoop.

Thanks,
Jack
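
As the replies above note, replication is a per-file attribute, so there
is nothing to configure on the directories themselves: new files take the
writer's dfs.replication, and existing files must be changed file by file,
which is what hadoop fs -setrep -R does. A programmatic sketch of the same
recursive walk, with the path and target replication as placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetRepRecursive {
    static void setRep(FileSystem fs, Path dir, short rep) throws Exception {
        for (FileStatus status : fs.listStatus(dir)) {
            if (status.isDirectory()) {
                setRep(fs, status.getPath(), rep); // recurse into subdirectories
            } else {
                fs.setReplication(status.getPath(), rep); // per-file change
            }
        }
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        setRep(fs, new Path("/tmp"), (short) 1); // path and value are placeholders
    }
}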


Re: Hadoop dfs upgrade fails when HA enabled

2014-03-21 Thread Meng QingPing
I see, thanks.


2014-03-21 16:20 GMT+08:00 Azuryy Yu :

> It'll be supported in 2.4.
> please take a look here:
> https://issues.apache.org/jira/browse/HDFS-5138
>
>
>
> On Fri, Mar 21, 2014 at 3:46 PM, Meng QingPing wrote:
>
>> Hi,
>>
>> A Hadoop dfs upgrade fails when HA is enabled. Can Hadoop add a feature
>> to upgrade dfs automatically based on the HA configuration?
>>
>> Thanks,
>> Jack
>>
>
>


-- 
Thanks,
Qingping


Hadoop dfs upgrade fails when HA enabled

2014-03-21 Thread Meng QingPing
Hi,

A Hadoop dfs upgrade fails when HA is enabled. Can Hadoop add a feature to
upgrade dfs automatically based on the HA configuration?

Thanks,
Jack