[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-19 Thread zhaoyunjiong (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906435#comment-13906435 ]

zhaoyunjiong commented on HDFS-5944:


Thank you Brandon and Benoy.

> LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right 
> cause SecondaryNameNode failed do checkpoint
> -
>
> Key: HDFS-5944
> URL: https://issues.apache.org/jira/browse/HDFS-5944
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 1.2.0, 2.2.0
> Reporter: zhaoyunjiong
> Assignee: zhaoyunjiong
> Attachments: HDFS-5944-branch-1.2.patch, HDFS-5944.patch, 
> HDFS-5944.test.txt, HDFS-5944.trunk.patch
>
>
> In our cluster, we encountered an error like this:
> java.io.IOException: saveLeases found path 
> /XXX/20140206/04_30/_SUCCESS.slc.log but is not under construction.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveFilesUnderConstruction(FSNamesystem.java:6217)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Saver.save(FSImageFormat.java:607)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveCurrent(FSImage.java:1004)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:949)
> What happened:
> Client A opened file /XXX/20140206/04_30/_SUCCESS.slc.log for write,
> and kept renewing its lease.
> Client B deleted /XXX/20140206/04_30/.
> Client C opened file /XXX/20140206/04_30/_SUCCESS.slc.log for write.
> Client C closed the file /XXX/20140206/04_30/_SUCCESS.slc.log.
> Then the SecondaryNameNode tried to do a checkpoint and failed, because the 
> lease held by Client A was not deleted when Client B deleted /XXX/20140206/04_30/.
> The reason is a bug in findLeaseWithPrefixPath:
>   int srclen = prefix.length();
>   if (p.length() == srclen || p.charAt(srclen) == Path.SEPARATOR_CHAR) {
>     entries.put(entry.getKey(), entry.getValue());
>   }
> Here, when prefix is /XXX/20140206/04_30/ and p is 
> /XXX/20140206/04_30/_SUCCESS.slc.log, p.charAt(srclen) is '_', so the check 
> fails and the lease is skipped.
> The fix is simple; I'll upload a patch later.
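
To make the failure mode concrete, the check can be reproduced in isolation. This is a minimal sketch, not the actual LeaseManager code: the class and method names are invented for illustration, and the trailing-separator trim only approximates the approach taken by the patch.

```java
// Minimal sketch of the prefix check in findLeaseWithPrefixPath.
// The buggy form misses entries when the prefix ends with '/', because
// charAt(srclen) then points at the first character *after* the separator.
public class PrefixCheckSketch {
  static final char SEPARATOR = '/';

  // Buggy check, as quoted in the issue description.
  static boolean buggyMatch(String prefix, String p) {
    if (!p.startsWith(prefix)) {
      return false;
    }
    int srclen = prefix.length();
    return p.length() == srclen || p.charAt(srclen) == SEPARATOR;
  }

  // Fixed check: strip one trailing separator from the prefix first
  // (illustrative; the actual HDFS-5944 patch may differ in detail).
  static boolean fixedMatch(String prefix, String p) {
    if (prefix.length() > 1 && prefix.charAt(prefix.length() - 1) == SEPARATOR) {
      prefix = prefix.substring(0, prefix.length() - 1);
    }
    return buggyMatch(prefix, p);
  }

  public static void main(String[] args) {
    String p = "/XXX/20140206/04_30/_SUCCESS.slc.log";
    // With a trailing '/', the buggy check fails to match the open file,
    // so the lease under that directory is never found and removed.
    System.out.println(buggyMatch("/XXX/20140206/04_30/", p)); // false
    System.out.println(fixedMatch("/XXX/20140206/04_30/", p)); // true
  }
}
```

With the trimmed prefix, charAt(srclen) lands on the '/' separating the directory from the file name, so the lease is matched as intended.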



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-19 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906420#comment-13906420 ]

Brandon Li commented on HDFS-5944:
--

The unit test passed in my local run. I will commit the patch shortly.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-19 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13906357#comment-13906357 ]

Hadoop QA commented on HDFS-5944:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12629879/HDFS-5944.trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6181//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6181//console

This message is automatically generated.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-19 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13905958#comment-13905958 ]

Brandon Li commented on HDFS-5944:
--

+1. Both patches look good to me. 



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-19 Thread zhaoyunjiong (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13905361#comment-13905361 ]

zhaoyunjiong commented on HDFS-5944:


Multiple trailing "/" characters are impossible.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-18 Thread Benoy Antony (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13904249#comment-13904249 ]

Benoy Antony commented on HDFS-5944:


Good job finding and fixing this bug, [~zhaoyunjiong].
Could there be multiple trailing "/" characters? If so, removing only the last 
character may not be enough.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-14 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901718#comment-13901718 ]

Brandon Li commented on HDFS-5944:
--

{quote}1. Is it enough for just writing a unit test for 
findLeaseWithPrefixPath?{quote}
Please feel free to include the unit test uploaded yesterday. You can also add 
more test steps, such as using a FileSystem object to delete a path (e.g., 
"/a/b/../.") as you mentioned. 
{quote}2. In trunk, there is no TestLeaseManager.java, should I add one?{quote}
You can add the unit test to TestLease.java.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-13 Thread zhaoyunjiong (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901171#comment-13901171 ]

zhaoyunjiong commented on HDFS-5944:


Brandon, thanks for taking the time to review this patch.
I don't think users use DFSClient directly. But even through 
DistributedFileSystem, we can still send a path ending with "/" by passing a 
path like "/a/b/../": in getPathName, String result = 
makeAbsolute(file).toUri().getPath() will return "/a/".
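
That normalization step can be illustrated with plain java.net.URI, which resolves the ".." segment but keeps the trailing separator. This is only an analogy for the makeAbsolute(file).toUri().getPath() behavior described above, not Hadoop's actual Path handling:

```java
import java.net.URI;

// Sketch: why a path like "/a/b/../" can reach the NameNode with a
// trailing '/'. RFC 3986 dot-segment removal, as implemented by
// URI.normalize(), resolves ".." but preserves the trailing separator.
public class TrailingSlashDemo {
  public static void main(String[] args) {
    String raw = "/a/b/../";
    String normalized = URI.create(raw).normalize().getPath();
    System.out.println(normalized); // prints "/a/" -- trailing '/' survives
  }
}
```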

About the unit test, I'd be happy to add one. I have two questions that need your help:
1. Is it enough for just writing a unit test for findLeaseWithPrefixPath?
2. In trunk, there is no TestLeaseManager.java, should I add one?



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-13 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900739#comment-13900739 ]

Brandon Li commented on HDFS-5944:
--

In branch-1, if the directory is deleted through a FileSystem object (e.g., 
fs.delete(...)), the ending "/" is trimmed before the delete request gets to 
the NN. However, as the uploaded unit test shows, using DFSClient directly can 
send an ending "/" to the NN and trigger the error.

That is, as long as HDFS users (e.g., MapReduce) use a FileSystem object, they 
should not encounter this problem. Comments?



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-13 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900611#comment-13900611 ]

Brandon Li commented on HDFS-5944:
--

{quote}how could Client C opens file for write after its parent directory is 
deleted{quote}
My bad. Please ignore this part.



[jira] [Commented] (HDFS-5944) LeaseManager:findLeaseWithPrefixPath didn't handle path like /a/b/ right cause SecondaryNameNode failed do checkpoint

2014-02-13 Thread Brandon Li (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900569#comment-13900569 ]

Brandon Li commented on HDFS-5944:
--

[~zhaoyunjiong], thanks for the patch. It would be nice if you could also add 
a unit test.

Maybe I missed something, but how could Client C open the file for write after 
its parent directory is deleted? Do we have another bug here?
{quote}
Client B deleted /XXX/20140206/04_30/
Client C open file /XXX/20140206/04_30/_SUCCESS.slc.log for write
{quote}
