[ 
https://issues.apache.org/jira/browse/HDFS-8469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556632#comment-14556632
 ] 

Colin Patrick McCabe commented on HDFS-8469:
--------------------------------------------

It looks like this behavior was introduced by HDFS-5138.  I skimmed the 
comments, but I didn't see any discussion of datanode lock file changes.  As 
far as I can tell, the fact that locking is now disabled on the datanode was 
unintentional.  [~atm], [~tlipcon], [~sureshms], any perspective on this?
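For context, the protection at stake is an OS-level exclusive lock on an in_use.lock file inside each storage directory, so a second process touching the same directory fails fast. A minimal sketch of that pattern (class and method names here are hypothetical, not the actual Storage.java code):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

// Sketch of the storage-directory locking pattern: hold an exclusive
// file lock on "in_use.lock" for as long as the process owns the directory.
public class DirLockSketch {
    public static FileLock tryLockDir(File dir) throws Exception {
        File lockFile = new File(dir, "in_use.lock");
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
        FileLock lock = raf.getChannel().tryLock();  // null if another process holds it
        if (lock == null) {
            raf.close();  // lost the race; give up the file handle
        }
        return lock;
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "dirlock-demo");
        dir.mkdirs();
        FileLock lock = tryLockDir(dir);
        System.out.println(lock != null);  // true: this process now owns the directory
        lock.release();
    }
}
```

With shared=true, this tryLock step is skipped entirely, which is exactly why a second datanode can start on the same directories unnoticed.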

> Lockfiles are not being created for datanode storage directories
> ----------------------------------------------------------------
>
>                 Key: HDFS-8469
>                 URL: https://issues.apache.org/jira/browse/HDFS-8469
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>
> Lockfiles are not being created for datanode storage directories.  Due to a 
> mixup, we are initializing the StorageDirectory class with shared=true (an 
> option which was only intended for NFS directories used to implement NameNode 
> HA).  Setting shared=true disables lockfile generation and prints a log 
> message like this:
> {code}
> 2015-05-22 11:45:16,367 INFO  common.Storage (Storage.java:lock(675)) - 
> Locking is disabled for 
> /home/cmccabe/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5/current/BP-122766180-127.0.0.1-1432320314834
> {code}
> Without lock files, two datanode processes could accidentally be started on 
> the same storage directories without either one detecting the conflict.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)