Hi Abhishek,

I'm not sure what you mean by that, but HDFS generally has no
client-exposed file locking on reads. There are leases that prevent
multiple writers to a single file, but nothing on the read side.

Replication of the blocks under a file is a different concept and is
completely unrelated to this.

This needs to be built into your application's or stack's access-control
layer, since HDFS does not provide it.
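To make the ZooKeeper suggestion from earlier in the thread concrete: below is a minimal, hypothetical sketch of acquiring a distributed lock around an HDFS read using Curator's InterProcessMutex recipe. The connect string "zk-host:2181" and the lock path "/locks/hdfs/myfile" are placeholders, and it assumes a running ZooKeeper ensemble plus the curator-framework and curator-recipes jars on the classpath. Note this is purely advisory: only clients that cooperate by acquiring the same lock node are excluded; HDFS itself enforces nothing.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class HdfsFileLockSketch {
    public static void main(String[] args) throws Exception {
        // Connect to ZooKeeper with retries; host:port is a placeholder.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // One lock node per HDFS path you want to guard. Every application
        // that reads the file must agree to acquire this lock first.
        InterProcessMutex lock =
                new InterProcessMutex(client, "/locks/hdfs/myfile");
        lock.acquire(); // blocks until no other client holds the lock
        try {
            // ... open and read the HDFS file here ...
        } finally {
            lock.release(); // always release, even if the read fails
        }
        client.close();
    }
}
```

The lock path convention (one znode per guarded file) is just one possible scheme; the key point is that all cooperating readers and writers agree on it.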

On Fri, Feb 22, 2013 at 9:33 PM, abhishek <abhishek.dod...@gmail.com> wrote:
> Harsh,
>
> Can we load the file into HDFS with a replication factor of one and lock the file?
>
> Regards
> Abhishek
>
>
> On Feb 22, 2013, at 1:03 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> HDFS does not have such a client-side feature, but your applications
>> can use Apache Zookeeper to coordinate and implement this on their own
>> - it can be used to achieve distributed locking. While at ZooKeeper,
>> also checkout https://github.com/Netflix/curator which makes using it
>> for common needs very easy.
>>
>> On Fri, Feb 22, 2013 at 5:17 AM, abhishek <abhishek.dod...@gmail.com> wrote:
>>>
>>>> Hello,
>>>
>>>> How can I impose read lock, for a file in HDFS
>>>>
>>>> So that only one user (or) one application can access the file in HDFS at
>>>> any point in time.
>>>>
>>>> Regards
>>>> Abhi
>>>
>>
>> --
>> Harsh J
>>
--
Harsh J
