[jira] [Updated] (HDFS-5425) Renaming underconstruction file with snapshots can make NN failure on restart

2013-11-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5425:


Attachment: HDFS-5425.patch

Oops! That was a Ctrl+C/Ctrl+V problem. ;-)
Thanks, Uma, for pointing it out.
Uploading the updated patch.

> Renaming underconstruction file with snapshots can make NN failure on restart
> ------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch, HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818047#comment-13818047
 ] 

Hudson commented on HDFS-5482:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4709 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4709/])
HDFS-5482. DistributedFileSystem#listPathBasedCacheDirectives must support 
relative paths. Contributed by Colin Patrick McCabe. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540257)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCacheAdminConf.xml


> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> -------------------------------------------------------------------------------
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives fails with 
> "java.net.URISyntaxException: Relative path in absolute URI":
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}
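
The malformed URI in the trace shows the root cause: a URI is being built from 
the filesystem's scheme and authority plus a still-relative path ("foo/foo"), 
which java.net.URI rejects. A minimal sketch of the correct qualification, 
using only the public org.apache.hadoop.fs.Path API (the URI and working 
directory below are illustrative, not taken from the report):

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class RelativePathSketch {
  public static void main(String[] args) {
    // Illustrative values; any HDFS URI and working directory would do.
    URI fsUri = URI.create("hdfs://namenode:8020");
    Path workingDir = new Path("/user/schu");

    // Resolve the relative path against the working directory first,
    // so that qualification never sees a relative path.
    Path p = new Path("foo");
    Path absolute = p.isAbsolute() ? p : new Path(workingDir, p);
    Path qualified = absolute.makeQualified(fsUri, workingDir);

    System.out.println(qualified); // hdfs://namenode:8020/user/schu/foo
  }
}
{code}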



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5425) Renaming underconstruction file with snapshots can make NN failure on restart

2013-11-08 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-5425:
--

Summary: Renaming underconstruction file with snapshots can make NN failure 
on restart  (was: When doing some snapshot operations,when we restart NN,it is 
shutting down with exception)

> Renaming underconstruction file with snapshots can make NN failure on restart
> ------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818045#comment-13818045
 ] 

Uma Maheswara Rao G commented on HDFS-5425:
---

Thanks, Sathish, for filing the bug, and thanks to Vinay for the patch.

The source change is straightforward and looks fine.
One comment on the test:
{code}
// create file after s0 so that the file should not be included in s0
+DFSTestUtil.createFile(hdfs, bar, BLOCKSIZE, REPL, SEED);
{code}

It looks like this comment is wrong here, right?



> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818041#comment-13818041
 ] 

Hadoop QA commented on HDFS-5425:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612970/HDFS-5425.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5371//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5371//console

This message is automatically generated.

> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 

[jira] [Updated] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5482:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk.  Thank you for the patch, Colin.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> -------------------------------------------------------------------------------
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives fails with 
> "java.net.URISyntaxException: Relative path in absolute URI":
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5482:


Hadoop Flags: Reviewed

+1 for the patch.  All of my tests were successful with patch version 3.  I'll 
commit this shortly.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> -------------------------------------------------------------------------------
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives fails with 
> "java.net.URISyntaxException: Relative path in absolute URI":
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13818005#comment-13818005
 ] 

Vinay commented on HDFS-5425:
-

The exact scenario is (a minimal repro sketch follows):
1. Rename an open file inside a snapshottable directory after taking a 
snapshot.
2. Checkpoint.
3. Restart the NN.
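
A minimal sketch of that scenario as a MiniDFSCluster test (illustrative only; 
it assumes the standard hadoop-hdfs test utilities and is not the test from the 
attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class RenameOpenFileWithSnapshotRepro {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    try {
      DistributedFileSystem hdfs = cluster.getFileSystem();
      Path dir = new Path("/foo");
      hdfs.mkdirs(dir);
      hdfs.allowSnapshot(dir);

      // keep the file open so it stays under construction
      FSDataOutputStream out = hdfs.create(new Path(dir, "bar"));
      out.write(new byte[1024]);
      out.hflush();

      hdfs.createSnapshot(dir, "s0");

      // 1. rename the open file after taking the snapshot
      hdfs.rename(new Path(dir, "bar"), new Path(dir, "bar2"));

      // 2. checkpoint, so the fsimage records the renamed
      //    under-construction file
      hdfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
      hdfs.saveNamespace();
      hdfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);

      // 3. restart the NN; before the fix, loading files under
      //    construction from the fsimage hit the Preconditions check
      cluster.restartNameNode(true);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}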

> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5425:


 Assignee: Vinay  (was: Jing Zhao)
Affects Version/s: 3.0.0
   2.2.0
   Status: Patch Available  (was: Open)

> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0, 3.0.0
>Reporter: sathish
>Assignee: Vinay
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5425:


Attachment: HDFS-5425.patch

Attaching a patch with a unit test. Please review.

> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: sathish
>Assignee: Jing Zhao
> Attachments: HDFS-5425.patch
>
>
> I faced this when doing some snapshot operations like createSnapshot and 
> renameSnapshot; when I restarted my NN, it shut down with the following 
> exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5428) under construction files deletion after snapshot+checkpoint+nn restart leads nn safemode

2013-11-08 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817987#comment-13817987
 ] 

Vinay commented on HDFS-5428:
-

bq. Vinay, I think this precondition issue with rename was filed by Sathish 
also sometime back. HDFS-5425. Is that issue same as you are also seeing?
Yes, Uma and Jing,
this issue is the same as HDFS-5425. I will post a patch with a unit test in 
HDFS-5425 soon. Thanks for pointing me to that jira.

> under construction files deletion after snapshot+checkpoint+nn restart leads 
> nn safemode
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-5428
> URL: https://issues.apache.org/jira/browse/HDFS-5428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5428-v2.patch, HDFS-5428.000.patch, 
> HDFS-5428.001.patch, HDFS-5428.patch
>
>
> 1. allow snapshots under dir /foo
> 2. create a file /foo/test/bar and start writing to it
> 3. create a snapshot s1 under /foo after block is allocated and some data has 
> been written to it
> 4. Delete the directory /foo/test
> 5. wait till checkpoint or do saveNameSpace
> 6. restart NN.
> NN enters to safemode.
> Analysis:
> Snapshot nodes loaded from fsimage are always complete and all blocks will be 
> in COMPLETE state. 
> So when the Datanode reports RBW blocks those will not be updated in 
> blocksmap.
> Some of the FINALIZED blocks will be marked as corrupt due to length mismatch.
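
A compact sketch of the steps above (illustrative; it reuses the same 
MiniDFSCluster utilities as the hadoop-hdfs tests and is not the attached 
patch's test):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class DeleteOpenFileAfterSnapshotRepro {
  public static void main(String[] args) throws Exception {
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    try {
      DistributedFileSystem hdfs = cluster.getFileSystem();
      Path foo = new Path("/foo");
      hdfs.mkdirs(new Path(foo, "test"));
      hdfs.allowSnapshot(foo);                          // 1. allow snapshots under /foo

      FSDataOutputStream out = hdfs.create(new Path("/foo/test/bar"));
      out.write(new byte[1024]);                        // 2. start writing; a block
      out.hflush();                                     //    is allocated with data

      hdfs.createSnapshot(foo, "s1");                   // 3. snapshot while file is open
      hdfs.delete(new Path(foo, "test"), true);         // 4. delete the directory

      hdfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);  // 5. checkpoint
      hdfs.saveNamespace();
      hdfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);

      cluster.restartNameNode(true);                    // 6. NN sticks in safemode
    } finally {
      cluster.shutdown();
    }
  }
}
{code}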



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817988#comment-13817988
 ] 

Hadoop QA commented on HDFS-5488:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612945/HDFS-5488.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5368//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5368//console

This message is automatically generated.

> Clean up TestHftpURLTimeout
> ---------------------------
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpTimeout and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817956#comment-13817956
 ] 

Hadoop QA commented on HDFS-5482:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612936/HDFS-5482.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5366//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5366//console

This message is automatically generated.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> -------------------------------------------------------------------------------
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives fails with 
> "java.net.URISyntaxException: Relative path in absolute URI":
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5480) Update Balancer for HDFS-2832

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-5480:
-

Attachment: h5480_20131108b.patch

h5480_20131108b.patch: fixes all balancer-related tests.

Running org.apache.hadoop.hdfs.server.balancer.TestBalancer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.961 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancer
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.533 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.082 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.416 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.72 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.708 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth


> Update Balancer for HDFS-2832
> -----------------------------
>
> Key: HDFS-5480
> URL: https://issues.apache.org/jira/browse/HDFS-5480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5480_20131108.patch, h5480_20131108b.patch
>
>
> Block location type is changed from datanode to datanode storage.  Balancer 
> needs to handle it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2832:
-

Attachment: (was: h2832_20131108.patch)

> Enable support for heterogeneous storages in HDFS
> --------------------------------------------------
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> h2832_20131023.patch, h2832_20131023b.patch, h2832_20131025.patch, 
> h2832_20131028.patch, h2832_20131028b.patch, h2832_20131029.patch, 
> h2832_20131103.patch, h2832_20131104.patch, h2832_20131105.patch, 
> h2832_20131107b.patch, h2832_20131108.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages.
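
A toy sketch of the proposed model shift (all names below are invented for 
illustration; they are not the patch's actual types):

{code}
import java.util.ArrayList;
import java.util.List;

// Old model: a block location names a datanode, which implicitly is one
// storage. Proposed model: a datanode holds a collection of storages, and
// a block location names a specific storage on a specific datanode.
class ToyStorage {
  final String storageId;            // e.g. one per configured directory/volume
  ToyStorage(String storageId) { this.storageId = storageId; }
}

class ToyDatanode {
  final List<ToyStorage> storages = new ArrayList<ToyStorage>();
}

class ToyBlockLocation {
  final ToyDatanode datanode;
  final ToyStorage storage;          // new: which volume on that datanode
  ToyBlockLocation(ToyDatanode dn, ToyStorage s) {
    this.datanode = dn;
    this.storage = s;
  }
}
{code}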



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2832:
-

Attachment: h2832_20131108.patch

> Enable support for heterogeneous storages in HDFS
> --------------------------------------------------
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> h2832_20131023.patch, h2832_20131023b.patch, h2832_20131025.patch, 
> h2832_20131028.patch, h2832_20131028b.patch, h2832_20131029.patch, 
> h2832_20131103.patch, h2832_20131104.patch, h2832_20131105.patch, 
> h2832_20131107b.patch, h2832_20131108.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-2832) Enable support for heterogeneous storages in HDFS

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2832:
-

Attachment: h2832_20131108.patch

h2832_20131108.patch

> Enable support for heterogeneous storages in HDFS
> --------------------------------------------------
>
> Key: HDFS-2832
> URL: https://issues.apache.org/jira/browse/HDFS-2832
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: 20130813-HeterogeneousStorage.pdf, H2832_20131107.patch, 
> h2832_20131023.patch, h2832_20131023b.patch, h2832_20131025.patch, 
> h2832_20131028.patch, h2832_20131028b.patch, h2832_20131029.patch, 
> h2832_20131103.patch, h2832_20131104.patch, h2832_20131105.patch, 
> h2832_20131107b.patch, h2832_20131108.patch
>
>
> HDFS currently supports a configuration where storages are a list of 
> directories. Typically each of these directories corresponds to a volume with 
> its own file system. All these directories are homogeneous and therefore 
> identified as a single storage at the namenode. I propose changing the 
> current model, where a Datanode *is a* storage, to one where a Datanode *is a 
> collection of* storages.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5325) Remove WebHdfsFileSystem#ConnRunner

2013-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817930#comment-13817930
 ] 

Hudson commented on HDFS-5325:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4708 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4708/])
HDFS-5325. Remove WebHdfsFileSystem#ConnRunner. Contributed by Haohui Mai. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540235)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> Remove WebHdfsFileSystem#ConnRunner
> -----------------------------------
>
> Key: HDFS-5325
> URL: https://issues.apache.org/jira/browse/HDFS-5325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5325.000.patch
>
>
> The class WebHdfsFileSystem#ConnRunner is only used in unit tests. There are 
> equivalent classes (FsPathRunner / URLRunner) that provide the same functionality.
> This jira removes the class to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5325) Remove WebHdfsFileSystem#ConnRunner

2013-11-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5325:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2.

> Remove WebHdfsFileSystem#ConnRunner
> -----------------------------------
>
> Key: HDFS-5325
> URL: https://issues.apache.org/jira/browse/HDFS-5325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.3.0
>
> Attachments: HDFS-5325.000.patch
>
>
> The class WebHdfsFileSystem#ConnRunner is only used in unit tests. There are 
> equivalent classes (FsPathRunner / URLRunner) that provide the same functionality.
> This jira removes the class to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5489:
-

Attachment: HDFS-5489.000.patch

> Use TokenAspect in WebHDFS
> --------------------------
>
> Key: HDFS-5489
> URL: https://issues.apache.org/jira/browse/HDFS-5489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5489.000.patch
>
>
> HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem 
> to handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
> TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5489) Use TokenAspect in WebHDFS

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5489:


 Summary: Use TokenAspect in WebHDFS
 Key: HDFS-5489
 URL: https://issues.apache.org/jira/browse/HDFS-5489
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


HDFS-5440 provides TokenAspect for both HftpFileSystem and WebHdfsFileSystem to 
handle the delegation tokens. This jira refactors WebHdfsFileSystem to use 
TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5469) Add configuration property for the sub-directory export path

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817926#comment-13817926
 ] 

Hadoop QA commented on HDFS-5469:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612938/HDFS-5469.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.hdfs.nfs.TestReaddir
  org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
  org.apache.hadoop.hdfs.nfs.TestMountd

  The following test timeouts occurred in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs:

org.apache.hadoop.hdfs.nfs.nfs3.TestWrites

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5367//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5367//console

This message is automatically generated.

> Add configuration property for the sub-directory export path
> -------------------------------------------------------------
>
> Key: HDFS-5469
> URL: https://issues.apache.org/jira/browse/HDFS-5469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-5469.001.patch, HDFS-5469.002.patch
>
>
> Currently only HDFS root is exported. Adding this property is the first step 
> toward supporting sub-directory mounting.
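
For illustration only, a property like the one described could be read as 
below; the property name here is made up, not necessarily the one the patch 
adds:

{code}
import org.apache.hadoop.conf.Configuration;

public class ExportPointSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Before this change the NFS gateway effectively exported only "/".
    // A property like this (hypothetical key) would let a sub-directory
    // be exported instead.
    String exportPoint = conf.get("nfs.export.point", "/");
    System.out.println("NFS export point: " + exportPoint);
  }
}
{code}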



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5488:
-

Attachment: HDFS-5488.000.patch

> Clean up TestHftpURLTimeout
> ---------------------------
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpTimeout and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5488:
-

Status: Patch Available  (was: Open)

> Clean up TestHftpURLTimeout
> ---------------------------
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5488.000.patch
>
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpTimeout and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5488) Clean up TestHftpTimeout

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5488:


 Summary: Clean up TestHftpTimeout
 Key: HDFS-5488
 URL: https://issues.apache.org/jira/browse/HDFS-5488
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
connection. This jira cleans up TestHftpTimeout and merges its unit tests into 
TestURLConnectionFactory.
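
A minimal sketch of the timeout pattern being tested, using only java.net 
(this simplified factory is a stand-in, not the real URLConnectionFactory):

{code}
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

public class TimeoutConnectionFactory {
  private final int timeoutMillis;

  public TimeoutConnectionFactory(int timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  public URLConnection openConnection(URL url) throws IOException {
    URLConnection conn = url.openConnection();
    // Bound both connection establishment and reads so a hung server
    // cannot stall the client indefinitely.
    conn.setConnectTimeout(timeoutMillis);
    conn.setReadTimeout(timeoutMillis);
    return conn;
  }
}
{code}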



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5488) Clean up TestHftpURLTimeout

2013-11-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5488:
-

Summary: Clean up TestHftpURLTimeout  (was: Clean up TestHftpTimeout)

> Clean up TestHftpURLTimeout
> ---
>
> Key: HDFS-5488
> URL: https://issues.apache.org/jira/browse/HDFS-5488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> HftpFileSystem uses URLConnectionFactory to set the timeout of each HTTP 
> connection. This jira cleans up TestHftpTimeout and merges its unit tests 
> into TestURLConnectionFactory.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5487:
-

Attachment: HDFS-5487.000.patch

> Refactor TestHftpDelegationToken into TestTokenAspect
> -
>
> Key: HDFS-5487
> URL: https://issues.apache.org/jira/browse/HDFS-5487
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5487.000.patch
>
>
> HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
> appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
> them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5487) Refactor TestHftpDelegationToken into TestTokenAspect

2013-11-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5487:


 Summary: Refactor TestHftpDelegationToken into TestTokenAspect
 Key: HDFS-5487
 URL: https://issues.apache.org/jira/browse/HDFS-5487
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5487.000.patch

HDFS-5440 moves token-related logic to TokenAspect. Therefore, it is 
appropriate to clean up the unit tests of TestHftpDelegationToken and to move 
them into TestTokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5325) Remove WebHdfsFileSystem#ConnRunner

2013-11-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817912#comment-13817912
 ] 

Jing Zhao commented on HDFS-5325:
-

+1 for the patch. I will commit it soon.

> Remove WebHdfsFileSystem#ConnRunner
> ---
>
> Key: HDFS-5325
> URL: https://issues.apache.org/jira/browse/HDFS-5325
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5325.000.patch
>
>
> The class WebHdfsFileSystem#ConnRunner is only used in unit tests. There are 
> equivalent classes (FsPathRunner / URLRunner) that provide the same functionality.
> This jira removes the class to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5440) Extract the logic of handling delegation tokens in HftpFileSystem to the TokenAspect class

2013-11-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817907#comment-13817907
 ] 

Jing Zhao commented on HDFS-5440:
-

The patch looks good to me. One nit: "TokenManagementDelgator" should be 
renamed to "TokenManagementDelegator". 

Looks like all the other changes just move/refactor code, except that in 
HftpFileSystem we no longer maintain two tokens (renewToken and 
delegationToken), which originally had two different token kinds: one with 
"hftp", used for token renewal and cancellation, and one with "hdfs", used for 
normal operations (included in the query as the token parameter). With the 
patch, we will only have one token, with the kind "hftp". 

Looks like the NN server side does not currently check Token#kind. Also, the 
patch works fine for normal HftpFileSystem operations (with a secure setup), 
so this change looks fine to me. [~daryn], could you please just verify that 
this change is OK? In the meantime, we will also test distcp on a secured 
cluster with the patch.

If the tests go fine, I will commit the patch next Monday evening, provided 
there are no further comments.
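
For readers skimming the thread, a hedged illustration of the token-kind 
arrangement described above; it uses the real {{Token#setKind}} / 
{{Token#getKind}} API, but the class itself is only a demo, not code from the 
patch:

{code}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenKindSketch {
  public static void main(String[] args) {
    Token<TokenIdentifier> token = new Token<TokenIdentifier>();
    // Before the patch: the renew/cancel token carried the "hftp" kind, while
    // a copy with the "hdfs" kind was sent as the token query parameter.
    // After the patch, a single token keeps the "hftp" kind for both uses.
    token.setKind(new Text("hftp"));
    System.out.println("kind used for renewal: " + token.getKind());
  }
}
{code}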


> Extract the logic of handling delegation tokens in HftpFileSystem to the 
> TokenAspect class
> --
>
> Key: HDFS-5440
> URL: https://issues.apache.org/jira/browse/HDFS-5440
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5440.000.patch, HDFS-5440.001.patch
>
>
> The logic for handling delegation tokens in HftpFileSystem and 
> WebHdfsFileSystem is mostly identical. To simplify the code, this jira 
> proposes to extract the common code into a new class named TokenAspect.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5486:
-

Attachment: HDFS-5486-demo.patch

> Fix TestNameNodeMetrics
> ---
>
> Key: HDFS-5486
> URL: https://issues.apache.org/jira/browse/HDFS-5486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-5486-demo.patch
>
>
> The test assumes one block report per Datanode. We now send one block report 
> per storage, so the test needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817900#comment-13817900
 ] 

Junping Du commented on HDFS-5486:
--

Hi Arpit, as discussed in HDFS-5481, the leftover part of the v3 patch already 
includes a fix for this, so I split it out as a demo patch and put it here. 
Please feel free to merge it with the other test fixes. Thanks!

> Fix TestNameNodeMetrics
> ---
>
> Key: HDFS-5486
> URL: https://issues.apache.org/jira/browse/HDFS-5486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-5486-demo.patch
>
>
> The test assumes one block report per Datanode. We now send one block report 
> per storage, so the test needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5469) Add configuration property for the sub-directory export path

2013-11-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5469:
-

Attachment: HDFS-5469.002.patch

Uploaded a new patch to address Jing's comments.

> Add configuration property for the sub-directory export path
> 
>
> Key: HDFS-5469
> URL: https://issues.apache.org/jira/browse/HDFS-5469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-5469.001.patch, HDFS-5469.002.patch
>
>
> Currently only HDFS root is exported. Adding this property is the first step 
> toward supporting sub-directory mounting.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5469) Add configuration property for the sub-directory export path

2013-11-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5469:
-

Status: Patch Available  (was: Open)

> Add configuration property for the sub-directory export path
> 
>
> Key: HDFS-5469
> URL: https://issues.apache.org/jira/browse/HDFS-5469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-5469.001.patch, HDFS-5469.002.patch
>
>
> Currently only HDFS root is exported. Adding this property is the first step 
> toward supporting sub-directory mounting.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817897#comment-13817897
 ] 

Junping Du commented on HDFS-5481:
--

Ok. I will put the fix patch on HDFS-5486. Thanks!

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817894#comment-13817894
 ] 

Junping Du commented on HDFS-5481:
--

Hi Arpit, thanks for the review and comments! I can deliver a split patch to 
fix TestNameNodeMetrics in this jira if you need it.

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817896#comment-13817896
 ] 

Hadoop QA commented on HDFS-5485:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612900/HDFS-5485.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5364//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5364//console

This message is automatically generated.

> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-5481:


Summary: Fix TestDataNodeVolumeFailure  (was: Fix TestDataNodeVolumeFailure 
and TestNameNodeMetrics)

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817893#comment-13817893
 ] 

Arpit Agarwal commented on HDFS-5481:
-

I committed v2 just before you posted v3. I filed a separate bug for 
TestNameNodeMetrics (HDFS-5486).

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817890#comment-13817890
 ] 

Colin Patrick McCabe commented on HDFS-5482:


It turns out the place to look for compile errors is
https://builds.apache.org/job/PreCommit-HDFS-Build/5365/artifact/trunk/patchprocess/patchJavacWarnings.txt
rather than the console output itself.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure and TestNameNodeMetrics

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817892#comment-13817892
 ] 

Junping Du commented on HDFS-5481:
--

Oops. We are updating the JIRA at the same time. :)
Did you commit the v2 patch or the v3 patch? The v3 patch adds an additional 
fix for TestNameNodeMetrics.

> Fix TestDataNodeVolumeFailure and TestNameNodeMetrics
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5482:
---

Attachment: HDFS-5482.003.patch

fix compile problem

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch, 
> HDFS-5482.003.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5481.
-

   Resolution: Fixed
Fix Version/s: Heterogeneous Storage (HDFS-2832)
 Hadoop Flags: Reviewed

+1 for the patch. I committed it to branch HDFS-2832. Thanks for the 
contribution, Junping!



> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817887#comment-13817887
 ] 

Junping Du commented on HDFS-5481:
--

Added an additional fix for TestNameNodeMetrics in the v3 patch.

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure and TestNameNodeMetrics

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5481:
-

Summary: Fix TestDataNodeVolumeFailure and TestNameNodeMetrics  (was: Fix 
TestDataNodeVolumeFailure)

> Fix TestDataNodeVolumeFailure and TestNameNodeMetrics
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5481:
-

Attachment: HDFS-5481-v3.patch

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: Heterogeneous Storage (HDFS-2832)
>
> Attachments: HDFS-5481-v2.patch, HDFS-5481-v3.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817881#comment-13817881
 ] 

Arpit Agarwal commented on HDFS-5486:
-

I have a patch for this issue; I'm trying to see if I can combine any more 
test fixes into the same patch.

> Fix TestNameNodeMetrics
> ---
>
> Key: HDFS-5486
> URL: https://issues.apache.org/jira/browse/HDFS-5486
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The test assumes one block report per Datanode. We now send one block report 
> per storage, so the test needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-5486:


Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2832

> Fix TestNameNodeMetrics
> ---
>
> Key: HDFS-5486
> URL: https://issues.apache.org/jira/browse/HDFS-5486
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The test assumes one block report per Datanode. We now send one block report 
> per storage, so the test needs to be updated.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5486) Fix TestNameNodeMetrics

2013-11-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5486:
---

 Summary: Fix TestNameNodeMetrics
 Key: HDFS-5486
 URL: https://issues.apache.org/jira/browse/HDFS-5486
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The test assumes one block report per Datanode. We now send one block report 
per storage, so the test needs to be updated.
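
To make the changed expectation concrete, a toy calculation (all numbers and 
names are hypothetical, not taken from the actual test):

{code}
public class BlockReportCountSketch {
  public static void main(String[] args) {
    int numDataNodes = 2;
    int storagesPerDatanode = 2;  // e.g. two configured storage directories

    int oldExpected = numDataNodes;                        // one per datanode
    int newExpected = numDataNodes * storagesPerDatanode;  // one per storage

    System.out.println("old expectation: " + oldExpected
        + ", new expectation: " + newExpected);
  }
}
{code}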



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5450) better API for getting the cached blocks locations

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817866#comment-13817866
 ] 

Colin Patrick McCabe commented on HDFS-5450:


I was thinking about a libhdfs API like this:

{code}
struct hdfsBlockLocs;
struct hdfsBlockLoc;

/** 
 * hdfsGetFileBlockLocs - Get locations where a particular block
 * (determined by position and blocksize) of a file is stored. The last
 * element in the array is NULL. Due to replication, a single block could be
 * present on multiple hosts.
 *
 * @param fs The configured filesystem handle.
 * @param path The path of the file. 
 * @param start The start of the block.
 * @param length The length of the block.
 *
 * @return The array of block locations on success.
 * This must be freed with hdfsFreeBlockLocs.
 * NULL on error.  Errno will be set on error.
 */
struct hdfsBlockLocs *hdfsGetFileBlockLocs(hdfsFS fs,
const char* path, tOffset start, tOffset length);

/** 
 * hdfsFreeBlockLocs - Free an array of block locations.
 *
 * @param arr The array of block locations.
 */
void hdfsFreeBlockLocs(struct hdfsBlockLocs *arr);

/** 
 * hdfsNumBlockLocs - Get the size of an array of block locations.
 *
 * @return The size of the array.  May be 0.
 */
int hdfsNumBlockLocs(const struct hdfsBlockLocs *arr);

/** 
 * hdfsGetBlockLoc - Get a block location from an array of block
 *locations.
 *
 * @param arr The array of block locations.
 * @param idx The entry to get.
 *
 * @return The entry.
 */
const struct hdfsBlockLoc *hdfsGetBlockLoc(
const struct hdfsBlockLocs *arr, int idx);

/** 
 * hdfsBlockLocGetHosts - Get the datanode hostnames at a
 * particular block location.
 *
 * @param loc The block location.
 *
 * @return A NULL-terminated array of hostnames.  This must be freed with
 * hdfsFreeHosts.  This may be empty.
 * NULL on error.  errno will be set on error.
 */
char **hdfsBlockLocGetHosts(const struct hdfsBlockLoc *loc);

/** 
 * hdfsBlockLocGetCachedHosts - Get the cached datanode hostnames at a
 *   particular block location.
 *
 * @param loc The block location.
 *
 * @return A NULL-terminated array of hostnames.  This must be freed with
 * hdfsFreeHosts.  This may be empty.
 * NULL on error.  errno will be set on error.
 */
char **hdfsBlockLocGetCachedHosts(const struct hdfsBlockLoc *loc);
{code}

This would be a new addition, deprecating the existing {{hdfsGetHosts}} API.  
(However, we should leave the old API in for compatibility.)

The advantage of the new API is that, by using opaque data structures, we can 
add new fields later if we want to expose the other things in {{BlockLocation}}.  
We also don't have to do the work of converting everything from JNI "up front" 
when only one block location in the array is accessed.  Finally, it's easier to 
free the whole array when the array itself is represented by a single C object.

> better API for getting the cached blocks locations
> --
>
> Key: HDFS-5450
> URL: https://issues.apache.org/jira/browse/HDFS-5450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
>
> Currently, we have to downcast the {{BlockLocation}} to {{HdfsBlockLocation}} 
> to get information about whether a replica is cached.  We should have this 
> information in {{BlockLocation}} instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817859#comment-13817859
 ] 

Junping Du commented on HDFS-5481:
--

Thanks, Arpit, for the review and comments! Updated the v2 patch. 

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HDFS-5481-v2.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5481:
-

Attachment: HDFS-5481-v2.patch

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HDFS-5481-v2.patch, HDFS-5481.patch
>
>
> In the test case, it still uses datanodeID to generate the storage report. 
> Replacing it with storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817848#comment-13817848
 ] 

Hadoop QA commented on HDFS-5482:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612894/HDFS-5482.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5365//console

This message is automatically generated.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5479) Fix test failures in Balancer.

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HDFS-5479.
--

Resolution: Duplicate

> Fix test failures in Balancer.
> --
>
> Key: HDFS-5479
> URL: https://issues.apache.org/jira/browse/HDFS-5479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>
> Many test failures w.r.t. the balancer, as 
> https://builds.apache.org/job/PreCommit-HDFS-Build/5360/#showFailuresLink 
> shows. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5479) Fix test failures in Balancer.

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817846#comment-13817846
 ] 

Junping Du commented on HDFS-5479:
--

Given that HDFS-5480 was created at almost the same time and Nicholas has 
already delivered a patch for it, I will mark this JIRA as a duplicate.

> Fix test failures in Balancer.
> --
>
> Key: HDFS-5479
> URL: https://issues.apache.org/jira/browse/HDFS-5479
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>
> Many test failures w.r.t. the balancer, as 
> https://builds.apache.org/job/PreCommit-HDFS-Build/5360/#showFailuresLink 
> shows. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5480) Update Balancer for HDFS-2832

2013-11-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817844#comment-13817844
 ] 

Junping Du commented on HDFS-5480:
--

Good to know. Thanks, Nicholas!
I will mark HDFS-5479 as a duplicate and will help review the patch when it is 
ready. Thanks!

> Update Balancer for HDFS-2832
> -
>
> Key: HDFS-5480
> URL: https://issues.apache.org/jira/browse/HDFS-5480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5480_20131108.patch
>
>
> The block location type is changed from datanode to datanode storage.  The 
> Balancer needs to handle it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817840#comment-13817840
 ] 

Colin Patrick McCabe commented on HDFS-5482:


I'm not sure why this failed.  I found the following errors in the console:

{code}
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/dev-support/test-patch.sh:
 line 371: cd: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/conf: No such 
file or directory
...
cp: cannot stat `/home/jenkins/buildSupport/lib/*': No such file or directory
{code}

This looks like an automation issue, so I will re-trigger the build.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5366) recaching improvements

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817838#comment-13817838
 ] 

Colin Patrick McCabe commented on HDFS-5366:


The eclipse:eclipse target failure doesn't have anything to do with this patch. 
A stray JVM crash log is also what's causing the bogus release audit warning:

{code}
 !? hs_err_pid4577.log
Lines that start with ? in the release audit report indicate files that do 
not have an Apache license header.
{code}

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up (see the sketch below).  Currently, we only send it once.
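
A hedged sketch of the bounded-resend idea from the last bullet above; every 
name here is hypothetical, and the real logic would presumably live in the 
NameNode's caching monitor:

{code}
public class BoundedResendSketch {
  static final int MAX_SEND_ATTEMPTS = 3;  // assumption: a small retry budget

  private int attempts = 0;

  /** True while the DNA_CACHE/DNA_UNCACHE command may still be (re)sent. */
  boolean shouldSend() {
    return attempts < MAX_SEND_ATTEMPTS;
  }

  void recordSend() {
    attempts++;
  }
}
{code}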



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5469) Add configuration property for the sub-directory export path

2013-11-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817821#comment-13817821
 ] 

Jing Zhao commented on HDFS-5469:
-

The patch looks good to me. One minor comment is that we may no longer need to 
pass an export list into the Nfs3 constructor. +1 once this is addressed.

> Add configuration property for the sub-directory export path
> 
>
> Key: HDFS-5469
> URL: https://issues.apache.org/jira/browse/HDFS-5469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-5469.001.patch
>
>
> Currently only HDFS root is exported. Adding this property is the first step 
> toward supporting sub-directory mounting.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5469) Add configuration property for the sub-directory export path

2013-11-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5469:
-

Attachment: HDFS-5469.001.patch

> Add configuration property for the sub-directory export path
> 
>
> Key: HDFS-5469
> URL: https://issues.apache.org/jira/browse/HDFS-5469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-5469.001.patch
>
>
> Currently only HDFS root is exported. Adding this property is the first step 
> toward supporting sub-directory mounting.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5473) Consistent naming of user-visible caching classes and methods

2013-11-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817761#comment-13817761
 ] 

Andrew Wang commented on HDFS-5473:
---

Colin brought up renaming PathBasedCacheDirective (PBCD) to just CacheDirective 
(CD), since it's less of a mouthful.

> Consistent naming of user-visible caching classes and methods
> -
>
> Key: HDFS-5473
> URL: https://issues.apache.org/jira/browse/HDFS-5473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> It's kind of warty that (after HDFS-5326 goes in) DistributedFileSystem has 
> {{*CachePool}} methods that take a {{CachePoolInfo}} and 
> {{*PathBasedCacheDirective}} methods that take a 
> {{PathBasedCacheDirective}}. We should consider renaming {{CachePoolInfo}} to 
> {{CachePool}} for consistency.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5450) better API for getting the cached blocks locations

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817747#comment-13817747
 ] 

Colin Patrick McCabe commented on HDFS-5450:


We'll also need {{libhdfs}} support.  Currently this is the {{libhdfs}} API for 
getting block locations:

{code}
char*** hdfsGetHosts(hdfsFS fs, const char* path,  
tOffset start, tOffset length);
{code}

We probably need to add a new API that returns structures rather than strings, 
so that this and other information can be exposed to {{libhdfs}} users.

> better API for getting the cached blocks locations
> --
>
> Key: HDFS-5450
> URL: https://issues.apache.org/jira/browse/HDFS-5450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Andrew Wang
>Priority: Minor
>
> Currently, we have to downcast the {{BlockLocation}} to {{HdfsBlockLocation}} 
> to get information about whether a replica is cached.  We should have this 
> information in {{BlockLocation}} instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5485:
---

Status: Patch Available  (was: Open)

> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5485:
---

Attachment: HDFS-5485.001.patch

> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5485.001.patch
>
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5366) recaching improvements

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817737#comment-13817737
 ] 

Hadoop QA commented on HDFS-5366:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612870/HDFS-5366.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warning.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5362//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5362//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5362//console

This message is automatically generated.

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817729#comment-13817729
 ] 

Colin Patrick McCabe commented on HDFS-5485:


We should also display replication when listing directives.

> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817728#comment-13817728
 ] 

Hadoop QA commented on HDFS-5482:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612894/HDFS-5482.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5363//console

This message is automatically generated.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5371) Let client retry the same NN when "dfs.client.test.drop.namenode.response.number" is enabled

2013-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817727#comment-13817727
 ] 

Hudson commented on HDFS-5371:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4706 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4706/])
HDFS-5371. Let client retry the same NN when 
dfs.client.test.drop.namenode.response.number is enabled. Contributed by Jing 
Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540197)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/LossyRetryInvocationHandler.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Let client retry the same NN when 
> "dfs.client.test.drop.namenode.response.number" is enabled
> 
>
> Key: HDFS-5371
> URL: https://issues.apache.org/jira/browse/HDFS-5371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, test
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HDFS-5371.000.patch
>
>
> Currently, when dfs.client.test.drop.namenode.response.number is enabled for 
> testing, the client will start failover and try the other NN. But in most 
> testing cases we do not need to trigger the client failover here since, if 
> the drop-response number is >1, the next response received from the other NN 
> will also be dropped. We can let the client simply retry the same NN.
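
Conceptually, the proposed behavior amounts to something like the following 
(hypothetical names throughout; the real change lives in the retry policies):

{code}
public class RetrySameNodeSketch {
  interface NameNodeProxy {
    String call(String op) throws Exception;
  }

  /** Retry the same NN a bounded number of times instead of failing over. */
  static String invokeWithRetry(NameNodeProxy sameNN, int maxRetries)
      throws Exception {
    Exception last = null;
    for (int i = 0; i <= maxRetries; i++) {
      try {
        return sameNN.call("getFileInfo");  // illustrative operation
      } catch (Exception e) {
        last = e;  // response was dropped; retry the same NN
      }
    }
    throw last;
  }
}
{code}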



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5371) Let client retry the same NN when "dfs.client.test.drop.namenode.response.number" is enabled

2013-11-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5371:


   Resolution: Fixed
Fix Version/s: 2.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review Suresh! I've committed this to trunk and branch-2.

> Let client retry the same NN when 
> "dfs.client.test.drop.namenode.response.number" is enabled
> 
>
> Key: HDFS-5371
> URL: https://issues.apache.org/jira/browse/HDFS-5371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, test
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HDFS-5371.000.patch
>
>
> Currently, when dfs.client.test.drop.namenode.response.number is enabled for 
> testing, the client will start failover and try the other NN. But in most 
> testing cases we do not need to trigger the client failover here since, if 
> the drop-response number is >1, the next response received from the other NN 
> will also be dropped. We can let the client simply retry the same NN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4114) Deprecate the BackupNode and CheckpointNode in 2.0

2013-11-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817717#comment-13817717
 ] 

Suresh Srinivas commented on HDFS-4114:
---

[~shv], it's been a while since there has been a discussion about this jira. I 
do not know of many people using the BackupNode. Is this a good time to start 
the discussion about deprecating and removing support for the BackupNode? Do 
you still think we need to retain support for it?

The main reason is that if this functionality is not used by anyone, 
maintaining it adds unnecessary work. As an example, when I added support for 
the retry cache, there were a bunch of code paths related to the BackupNode 
that added unnecessary work.

> Deprecate the BackupNode and CheckpointNode in 2.0
> --
>
> Key: HDFS-4114
> URL: https://issues.apache.org/jira/browse/HDFS-4114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>Assignee: Eli Collins
>
> Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
> BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5482:
---

Attachment: HDFS-5482.002.patch

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch, HDFS-5482.002.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817704#comment-13817704
 ] 

Colin Patrick McCabe commented on HDFS-5482:


Yeah, you're absolutely right.  It looks like we already try to do that in 
{{DistributedFileSystem#listDirectives}} by calling {{makeQualified}}, but 
there is an error in the parameters given to that function which prevents it 
from working as expected.  (It should be passing the working directory.)  We 
should probably just change it to use {{fixRelativePart}} for consistency with 
the other functions in {{DistributedFileSystem.java}}.  And you're right, this 
was a regression introduced by HDFS-5326.  It would be good to have a unit 
test for this.
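
To illustrate the parameter issue, a standalone sketch using the public 
{{Path#makeQualified}} overload (the URI and paths here are made up):

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class QualifySketch {
  public static void main(String[] args) {
    URI fsUri = URI.create("hdfs://example:8020");
    Path workingDir = new Path("/user/schu");
    // With the working directory supplied, a relative path resolves cleanly:
    // prints hdfs://example:8020/user/schu/foo
    System.out.println(new Path("foo").makeQualified(fsUri, workingDir));
    // Without it, "foo" ends up glued onto the authority, producing the
    // malformed hdfs://example:8020foo/foo seen in the stack traces above.
  }
}
{code}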

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5371) Let client retry the same NN when "dfs.client.test.drop.namenode.response.number" is enabled

2013-11-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817701#comment-13817701
 ] 

Suresh Srinivas commented on HDFS-5371:
---

+1 for the patch.

> Let client retry the same NN when 
> "dfs.client.test.drop.namenode.response.number" is enabled
> 
>
> Key: HDFS-5371
> URL: https://issues.apache.org/jira/browse/HDFS-5371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, test
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5371.000.patch
>
>
> Currently, when dfs.client.test.drop.namenode.response.number is enabled for 
> testing, the client will start failover and try the other NN. But in most 
> testing cases we do not need to trigger client failover here, since if the 
> drop-response number is >1 the next response received from the other NN will 
> also be dropped. We can let the client simply retry the same NN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5371) Let client retry the same NN when "dfs.client.test.drop.namenode.response.number" is enabled

2013-11-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-5371:
--

Component/s: test
 ha

> Let client retry the same NN when 
> "dfs.client.test.drop.namenode.response.number" is enabled
> 
>
> Key: HDFS-5371
> URL: https://issues.apache.org/jira/browse/HDFS-5371
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, test
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-5371.000.patch
>
>
> Currently, when dfs.client.test.drop.namenode.response.number is enabled for 
> testing, the client will start failover and try the other NN. But in most 
> testing cases we do not need to trigger client failover here, since if the 
> drop-response number is >1 the next response received from the other NN will 
> also be dropped. We can let the client simply retry the same NN.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5428) under construction files deletion after snapshot+checkpoint+nn restart leads nn safemode

2013-11-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817693#comment-13817693
 ] 

Jing Zhao commented on HDFS-5428:
-

bq. One more case needs to be handled. 

Vinay, could you please provide more detailed steps to reproduce this (e.g., a 
unit test)? I think we can try to fix this in HDFS-5425.

> under construction files deletion after snapshot+checkpoint+nn restart leads 
> nn safemode
> 
>
> Key: HDFS-5428
> URL: https://issues.apache.org/jira/browse/HDFS-5428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5428-v2.patch, HDFS-5428.000.patch, 
> HDFS-5428.001.patch, HDFS-5428.patch
>
>
> 1. allow snapshots under dir /foo
> 2. create a file /foo/test/bar and start writing to it
> 3. create a snapshot s1 under /foo after block is allocated and some data has 
> been written to it
> 4. Delete the directory /foo/test
> 5. wait till checkpoint or do saveNameSpace
> 6. restart NN.
> NN enters safemode.
> Analysis:
> Snapshot nodes loaded from fsimage are always complete and all blocks will be 
> in COMPLETE state. 
> So when the DataNode reports RBW blocks, those will not be updated in the 
> blocksmap.
> Some of the FINALIZED blocks will be marked as corrupt due to length mismatch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817687#comment-13817687
 ] 

Hadoop QA commented on HDFS-5482:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12612866/HDFS-5482.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5361//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5361//console

This message is automatically generated.

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) DistributedFileSystem#listPathBasedCacheDirectives must support relative paths

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5482:
---

Summary: DistributedFileSystem#listPathBasedCacheDirectives must support 
relative paths  (was: CacheAdmin -removeDirectives fails on relative paths but 
-addDirective allows them)

> DistributedFileSystem#listPathBasedCacheDirectives must support relative paths
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5485:
--

 Summary: add command-line support for modifyDirective
 Key: HDFS-5485
 URL: https://issues.apache.org/jira/browse/HDFS-5485
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5482) CacheAdmin -removeDirectives fails on relative paths but -addDirective allows them

2013-11-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817679#comment-13817679
 ] 

Chris Nauroth commented on HDFS-5482:
-

-listDirectives with a path filter has the same problem.  (See below.)

I think the fix needs to go in 
{{DistributedFileSystem#listPathBasedCacheDescriptors}}.  Before HDFS-5326, we 
had a call to {{FileSystem#fixRelativePart}} in there.  Fixing it in 
{{DistributedFileSystem#listPathBasedCacheDescriptors}} would cover both 
-listDirectives and -removeDirectives as well as potential future callers.

{code}
[cnauroth@ubuntu:pts/0] hadoop-deploy-trunk 

> hadoop-3.0.0-SNAPSHOT/bin/hdfs cacheadmin -addDirective -path relative1 -pool 
> pool1
Added PathBasedCache entry 1
> hadoop-3.0.0-SNAPSHOT/bin/hdfs cacheadmin -listDirectives
Found 1 entry
ID  POOL   PATH 
1   pool1  /user/cnauroth/relative1 
[cnauroth@ubuntu:pts/0] hadoop-deploy-trunk 

> hadoop-3.0.0-SNAPSHOT/bin/hdfs cacheadmin -listDirectives -path relative1
Exception in thread "main" java.lang.IllegalArgumentException: 
java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://localhost:29000relative1/relative1
at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$ListPathBasedCacheDirectiveCommand.run(CacheAdmin.java:358)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://localhost:29000relative1/relative1
at java.net.URI.checkPath(URI.java:1804)
at java.net.URI.<init>(URI.java:752)
at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
... 4 more
{code}
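
As a sketch of the idea (a stand-in mirroring what 
{{FileSystem#fixRelativePart}} does; not the actual patch):

{code}
import org.apache.hadoop.fs.Path;

// Resolve a possibly-relative filter path against the working directory in
// one shared spot, so -listDirectives, -removeDirectives, and future callers
// all benefit.
public class FixRelativePartSketch {
  private final Path workingDir = new Path("/user/cnauroth");

  Path fixRelativePart(Path p) {
    // Absolute paths pass through; relative ones are resolved against the
    // working directory before any URI qualification happens.
    return p.isUriPathAbsolute() ? p : new Path(workingDir, p);
  }

  public static void main(String[] args) {
    System.out.println(new FixRelativePartSketch()
        .fixRelativePart(new Path("relative1")));  // /user/cnauroth/relative1
  }
}
{code}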



> CacheAdmin -removeDirectives fails on relative paths but -addDirective allows 
> them
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5485) add command-line support for modifyDirective

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5485:
---

Description: add command-line support for modifyDirective

> add command-line support for modifyDirective
> 
>
> Key: HDFS-5485
> URL: https://issues.apache.org/jira/browse/HDFS-5485
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> add command-line support for modifyDirective



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5366) recaching improvements

2013-11-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817613#comment-13817613
 ] 

Colin Patrick McCabe commented on HDFS-5366:


Here's a new patch incorporating Chris's fix.

The overall idea here is to keep lists of replicas to cache/uncache around 
until the DN replies and says that they've been acted on.  This is different 
from the current scheme, where they are "fire and forget."

To prevent re-sending these commands too often, this introduces a per-DN timer 
which sets the maximum rate at which commands can be re-sent.  (This timer can 
be overridden by the cache rescanner thread changing what should be cached, 
though.)
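
A toy model of that bookkeeping (all names made up, not the patch's classes):

{code}
import java.util.HashSet;
import java.util.Set;

// Per-DN pending cache/uncache work that is kept until acknowledged, plus a
// timer bounding how often the commands may be re-sent.
public class PendingCacheCommands {
  private final Set<Long> pendingCache = new HashSet<>();
  private final Set<Long> pendingUncache = new HashSet<>();
  private final long resendIntervalMs;
  private long lastSentMs = 0;  // epoch ms, so the first send is allowed

  public PendingCacheCommands(long resendIntervalMs) {
    this.resendIntervalMs = resendIntervalMs;
  }

  // Not fire-and-forget: entries stay until the DN reports them acted on.
  public void ack(long blockId) {
    pendingCache.remove(blockId);
    pendingUncache.remove(blockId);
  }

  // The cache rescanner can override the timer when the desired set changes.
  public boolean shouldResend(long nowMs, boolean rescannerOverride) {
    return rescannerOverride || nowMs - lastSentMs >= resendIntervalMs;
  }

  public void markSent(long nowMs) {
    lastSentMs = nowMs;
  }
}
{code}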

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5172) Handle race condition for writes

2013-11-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-5172.
--

Resolution: Fixed

This issue is fixed along with the fix to HDFS-5364. Resolving it as a 
duplicate.

> Handle race condition for writes
> 
>
> Key: HDFS-5172
> URL: https://issues.apache.org/jira/browse/HDFS-5172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> When an unstable write arrives, the following happens: 
> 1. retrieve the OpenFileCtx
> 2. create an async task to write it to HDFS
> The race is that the OpenFileCtx could be closed by the StreamMonitor; then 
> step 2 will simply return an error to the client.
> This was OK before streaming was supported. To support data streaming, the 
> file needs to be reopened.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5366) recaching improvements

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5366:
---

 Target Version/s: 3.0.0  (was: HDFS-4949)
Affects Version/s: (was: HDFS-4949)
   3.0.0

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5366) recaching improvements

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5366:
---

Attachment: HDFS-5366.002.patch

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5366) recaching improvements

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5366:
---

Status: Patch Available  (was: In Progress)

> recaching improvements
> --
>
> Key: HDFS-5366
> URL: https://issues.apache.org/jira/browse/HDFS-5366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5366-caching.001.patch, HDFS-5366.002.patch
>
>
> There are a few things about our HDFS-4949 recaching strategy that could be 
> improved.
> * We should monitor the DN's maximum and current mlock'ed memory consumption 
> levels, so that we don't ask the DN to do stuff it can't.
> * We should not try to initiate caching on stale or decommissioning DataNodes 
> (although we should not recache things stored on such nodes until they're 
> declared dead).
> * We might want to resend the {{DNA_CACHE}} or {{DNA_UNCACHE}} command a few 
> times before giving up.  Currently, we only send it once.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5480) Update Balancer for HDFS-2832

2013-11-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817589#comment-13817589
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-5480:
--

Hi Junping, I was trying to fix balancer-related tests so that we could merge 
the branch, so I think there is only a little overlap with HDFS-4989.

The balancer did not have storage information, so I added 
getDatanodeReportWithStorage.  It seems that it may be better to just add 
datanode information to BlockWithLocations.  I will try it.
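
As a sketch of that alternative (field names are assumptions, not the 
committed change):

{code}
// Let each block location carry both the datanode and the storage on that
// datanode, so the balancer can tell which storage holds each replica
// without a separate datanode-report-with-storage call.
public class BlockWithLocationsSketch {
  long blockId;
  String[] datanodeUuids;  // one entry per replica
  String[] storageIDs;     // parallel array: the storage holding that replica
}
{code}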

> Update Balancer for HDFS-2832
> -
>
> Key: HDFS-5480
> URL: https://issues.apache.org/jira/browse/HDFS-5480
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h5480_20131108.patch
>
>
> Block location type is changed from datanode to datanode storage.  Balancer 
> needs to handle it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5484) StorageType and State in DatanodeStorageInfo in NameNode is not accurate

2013-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817584#comment-13817584
 ] 

Arpit Agarwal commented on HDFS-5484:
-

I made it a sub-task of HDFS-2832. Thanks Eric.

> StorageType and State in DatanodeStorageInfo in NameNode is not accurate
> 
>
> Key: HDFS-5484
> URL: https://issues.apache.org/jira/browse/HDFS-5484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Eric Sirianni
>
> The fields in DatanodeStorageInfo are updated from two distinct paths:
> # block reports
> # storage reports (via heartbeats)
> The {{state}} and {{storageType}} fields are updated via the Block Report.  
> However, as seen in the code below, these fields are populated from a "dummy" 
> {{DatanodeStorage}} object constructed in the DataNode:
> {code}
> BPServiceActor.blockReport() {
> //...
> // Dummy DatanodeStorage object just for sending the block report.
> DatanodeStorage dnStorage = new DatanodeStorage(storageID);
> //...
> }
> {code}
> The net effect is that the {{state}} and {{storageType}} fields are always 
> the default of {{NORMAL}} and {{DISK}} in the NameNode.
> The recommended fix is to change {{FsDatasetSpi.getBlockReports()}} from:
> {code}
> public Map<String, BlockListAsLongs> getBlockReports(String bpid);
> {code}
> to:
> {code}
> public Map<DatanodeStorage, BlockListAsLongs> getBlockReports(String bpid);
> {code}
> thereby allowing {{BPServiceActor}} to send the "real" {{DatanodeStorage}} 
> object with the block report.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5484) StorageType and State in DatanodeStorageInfo in NameNode is not accurate

2013-11-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-5484:


Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2832

> StorageType and State in DatanodeStorageInfo in NameNode is not accurate
> 
>
> Key: HDFS-5484
> URL: https://issues.apache.org/jira/browse/HDFS-5484
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Eric Sirianni
>
> The fields in DatanodeStorageInfo are updated from two distinct paths:
> # block reports
> # storage reports (via heartbeats)
> The {{state}} and {{storageType}} fields are updated via the Block Report.  
> However, as seen in the code below, these fields are populated from a "dummy" 
> {{DatanodeStorage}} object constructed in the DataNode:
> {code}
> BPServiceActor.blockReport() {
> //...
> // Dummy DatanodeStorage object just for sending the block report.
> DatanodeStorage dnStorage = new DatanodeStorage(storageID);
> //...
> }
> {code}
> The net effect is that the {{state}} and {{storageType}} fields are always 
> the default of {{NORMAL}} and {{DISK}} in the NameNode.
> The recommended fix is to change {{FsDatasetSpi.getBlockReports()}} from:
> {code}
> public Map<String, BlockListAsLongs> getBlockReports(String bpid);
> {code}
> to:
> {code}
> public Map<DatanodeStorage, BlockListAsLongs> getBlockReports(String bpid);
> {code}
> thereby allowing {{BPServiceActor}} to send the "real" {{DatanodeStorage}} 
> object with the block report.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) CacheAdmin -removeDirectives fails on relative paths but -addDirective allows them

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5482:
---

Attachment: HDFS-5482.001.patch

It looks like {{removeDirectives}} needs to call {{makeQualified}} on the path.

I also added a test that uses relative paths with {{removeDirectives}}.

> CacheAdmin -removeDirectives fails on relative paths but -addDirective allows 
> them
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5482) CacheAdmin -removeDirectives fails on relative paths but -addDirective allows them

2013-11-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-5482:
---

Assignee: Colin Patrick McCabe
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

> CacheAdmin -removeDirectives fails on relative paths but -addDirective allows 
> them
> --
>
> Key: HDFS-5482
> URL: https://issues.apache.org/jira/browse/HDFS-5482
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-5482.001.patch
>
>
> CacheAdmin -addDirective allows using a relative path.
> However, -removeDirectives will error complaining with 
> "java.net.URISyntaxException: Relative path in absolute URI"
> {code}
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
> Added PathBasedCache entry 3
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
> Found 1 entry
> ID  POOL  PATH   
> 3   schu  /user/schu/foo 
> [schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
> Exception in thread "main" java.lang.IllegalArgumentException: 
> java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
>   at 
> org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
>   at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
>   at java.net.URI.checkPath(URI.java:1788)
>   at java.net.URI.<init>(URI.java:734)
>   at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
>   ... 4 more
> [schu@hdfs-c5-nfs ~]$ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5425) When doing some snapshot operations,when we restart NN,it is shutting down with exception

2013-11-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao reassigned HDFS-5425:
---

Assignee: Jing Zhao

> When doing some snapshot operations,when we restart NN,it is shutting down 
> with exception
> -
>
> Key: HDFS-5425
> URL: https://issues.apache.org/jira/browse/HDFS-5425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: sathish
>Assignee: Jing Zhao
>
> I faced this when I was doing some snapshot operations like 
> createSnapshot and renameSnapshot; when I restarted my NN, it shut down with 
> this exception:
> 2013-10-24 21:07:03,040 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:133)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.replace(INodeDirectoryWithSnapshot.java:82)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$ChildrenDiff.access$700(INodeDirectoryWithSnapshot.java:62)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.replaceChild(INodeDirectoryWithSnapshot.java:397)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot$DirectoryDiffList.access$900(INodeDirectoryWithSnapshot.java:376)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectoryWithSnapshot.replaceChild(INodeDirectoryWithSnapshot.java:598)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedReplaceINodeFile(FSDirectory.java:1548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.replaceINodeFile(FSDirectory.java:1537)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadFilesUnderConstruction(FSImageFormat.java:855)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:350)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:899)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:751)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:720)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:266)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:784)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:563)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:422)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:472)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1245)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1311)
> 2013-10-24 21:07:03,050 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2013-10-24 21:07:03,052 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG: 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5484) StorageType and State in DatanodeStorageInfo in NameNode is not accurate

2013-11-08 Thread Eric Sirianni (JIRA)
Eric Sirianni created HDFS-5484:
---

 Summary: StorageType and State in DatanodeStorageInfo in NameNode 
is not accurate
 Key: HDFS-5484
 URL: https://issues.apache.org/jira/browse/HDFS-5484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Eric Sirianni


The fields in DatanodeStorageInfo are updated from two distinct paths:
# block reports
# storage reports (via heartbeats)

The {{state}} and {{storageType}} fields are updated via the Block Report.  
However, as seen in the code below, these fields are populated from a "dummy" 
{{DatanodeStorage}} object constructed in the DataNode:
{code}
BPServiceActor.blockReport() {
//...
// Dummy DatanodeStorage object just for sending the block report.
DatanodeStorage dnStorage = new DatanodeStorage(storageID);
//...
}
{code}

The net effect is that the {{state}} and {{storageType}} fields are always the 
default of {{NORMAL}} and {{DISK}} in the NameNode.

The recommended fix is to change {{FsDatasetSpi.getBlockReports()}} from:
{code}
public Map<String, BlockListAsLongs> getBlockReports(String bpid);
{code}
to:
{code}
public Map<DatanodeStorage, BlockListAsLongs> getBlockReports(String bpid);
{code}
thereby allowing {{BPServiceActor}} to send the "real" {{DatanodeStorage}} 
object with the block report.
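
A sketch of the resulting shape, using stand-in types rather than the real 
HDFS classes:

{code}
import java.util.HashMap;
import java.util.Map;

// Key the block report by a full storage descriptor so the real state and
// type reach the NameNode, instead of a bare storage ID that forces the
// NORMAL/DISK defaults.
public class BlockReportKeySketch {
  static class StorageDescriptor {  // stand-in for DatanodeStorage
    final String id, state, type;
    StorageDescriptor(String id, String state, String type) {
      this.id = id; this.state = state; this.type = type;
    }
  }

  public static void main(String[] args) {
    Map<StorageDescriptor, long[]> reports = new HashMap<>();
    // The DN fills in real per-storage values rather than a dummy object:
    reports.put(new StorageDescriptor("DS-1", "READ_ONLY", "SSD"),
        new long[0] /* encoded block list */);
    System.out.println(reports.size() + " per-storage report(s)");
  }
}
{code}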



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5483) Make reportDiff resilient to malformed block reports

2013-11-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5483:
---

 Summary: Make reportDiff resilient to malformed block reports
 Key: HDFS-5483
 URL: https://issues.apache.org/jira/browse/HDFS-5483
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Arpit Agarwal


{{BlockManager#reportDiff}} can cause an assertion failure in 
{{BlockInfo#moveBlockToHead}} if the block report shows the same block as 
belonging to more than one storage.

The issue is that {{moveBlockToHead}} assumes it will find the 
DatanodeStorageInfo for the given block.

Exception details:
{code}
java.lang.AssertionError: Index is out of bound
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.setNext(BlockInfo.java:152)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.moveBlockToHead(BlockInfo.java:351)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.moveBlockToHead(DatanodeStorageInfo.java:243)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1841)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1709)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1637)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:984)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testVolumeFailure(TestDataNodeVolumeFailure.java:165)
{code}
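
A toy illustration of making the move-to-head step tolerate such a report 
(stand-in structures, not the HDFS classes):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReportDiffSketch {
  // Stands in for each storage's block list in the NameNode.
  static final Map<String, List<Long>> blocksByStorage = new HashMap<>();

  // Verify the block is actually tracked under the reported storage before
  // moving it to the head; a malformed report listing one block under two
  // storages would otherwise trip the assertion.
  static void moveToHeadSafely(String storageId, long blockId) {
    List<Long> list = blocksByStorage.get(storageId);
    if (list == null || !list.remove(Long.valueOf(blockId))) {
      return;  // skip (or log) instead of asserting
    }
    list.add(0, blockId);
  }

  public static void main(String[] args) {
    blocksByStorage.put("DS-1", new ArrayList<>(Arrays.asList(1L, 2L)));
    moveToHeadSafely("DS-1", 2L);  // 2 moves to the head
    moveToHeadSafely("DS-2", 2L);  // unknown storage: safely ignored
    System.out.println(blocksByStorage);  // {DS-1=[2, 1]}
  }
}
{code}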



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5428) under construction files deletion after snapshot+checkpoint+nn restart leads nn safemode

2013-11-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817544#comment-13817544
 ] 

Uma Maheswara Rao G commented on HDFS-5428:
---

Vinay, I think this precondition issue with rename was also filed by Sathish 
some time back: HDFS-5425. Is that the same issue you are seeing?

> under construction files deletion after snapshot+checkpoint+nn restart leads 
> nn safemode
> 
>
> Key: HDFS-5428
> URL: https://issues.apache.org/jira/browse/HDFS-5428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-5428-v2.patch, HDFS-5428.000.patch, 
> HDFS-5428.001.patch, HDFS-5428.patch
>
>
> 1. allow snapshots under dir /foo
> 2. create a file /foo/test/bar and start writing to it
> 3. create a snapshot s1 under /foo after block is allocated and some data has 
> been written to it
> 4. Delete the directory /foo/test
> 5. wait till checkpoint or do saveNameSpace
> 6. restart NN.
> NN enters safemode.
> Analysis:
> Snapshot nodes loaded from fsimage are always complete and all blocks will be 
> in COMPLETE state. 
> So when the DataNode reports RBW blocks, those will not be updated in the 
> blocksmap.
> Some of the FINALIZED blocks will be marked as corrupt due to length mismatch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5476) Snapshot: clean the blocks/files/directories under a renamed file/directory while deletion

2013-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817545#comment-13817545
 ] 

Hudson commented on HDFS-5476:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4705 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4705/])
HDFS-5476. Snapshot: clean the blocks/files/directories under a renamed 
file/directory while deletion. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1540142)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java


> Snapshot: clean the blocks/files/directories under a renamed file/directory 
> while deletion
> --
>
> Key: HDFS-5476
> URL: https://issues.apache.org/jira/browse/HDFS-5476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5476.001.patch
>
>
> Currently DstReference#destroyAndCollectBlocks may fail to clean the subtree 
> under the DstReference node for file/directory/snapshot deletion.
> Use case 1:
> # rename under-construction file with 0-sized blocks after snapshot.
> # delete the renamed directory.
> We need to make sure we delete the 0-sized block.
> Use case 2:
> # create snapshot s0 for /
> # create a new file under /foo/bar/
> # rename foo --> foo2
> # create snapshot s1
> # delete bar and foo2
> # delete snapshot s1
> We need to make sure we delete the file under /foo/bar since it is not 
> included in snapshot s0.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5476) Snapshot: clean the blocks/files/directories under a renamed file/directory while deletion

2013-11-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5476:


   Resolution: Fixed
Fix Version/s: 2.3.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, Nicholas and Vinay! I've committed this to trunk and 
branch-2.

> Snapshot: clean the blocks/files/directories under a renamed file/directory 
> while deletion
> --
>
> Key: HDFS-5476
> URL: https://issues.apache.org/jira/browse/HDFS-5476
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.3.0
>
> Attachments: HDFS-5476.001.patch
>
>
> Currently DstReference#destroyAndCollectBlocks may fail to clean the subtree 
> under the DstReference node for file/directory/snapshot deletion.
> Use case 1:
> # rename under-construction file with 0-sized blocks after snapshot.
> # delete the renamed directory.
> We need to make sure we delete the 0-sized block.
> Use case 2:
> # create snapshot s0 for /
> # create a new file under /foo/bar/
> # rename foo --> foo2
> # create snapshot s1
> # delete bar and foo2
> # delete snapshot s1
> We need to make sure we delete the file under /foo/bar since it is not 
> included in snapshot s0.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5482) CacheAdmin -removeDirectives fails on relative paths but -addDirective allows them

2013-11-08 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5482:
-

 Summary: CacheAdmin -removeDirectives fails on relative paths but 
-addDirective allows them
 Key: HDFS-5482
 URL: https://issues.apache.org/jira/browse/HDFS-5482
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Stephen Chu


CacheAdmin -addDirective allows using a relative path.

However, -removeDirectives will error complaining with 
"java.net.URISyntaxException: Relative path in absolute URI"

{code}
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -addDirective -path foo -pool schu
Added PathBasedCache entry 3
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -listDirectives
Found 1 entry
ID  POOL  PATH   
3   schu  /user/schu/foo 
[schu@hdfs-c5-nfs ~]$ hdfs cacheadmin -removeDirectives -path foo
Exception in thread "main" java.lang.IllegalArgumentException: 
java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
at org.apache.hadoop.fs.Path.makeQualified(Path.java:470)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listPathBasedCacheDirectives(DistributedFileSystem.java:1639)
at 
org.apache.hadoop.hdfs.tools.CacheAdmin$RemovePathBasedCacheDirectivesCommand.run(CacheAdmin.java:287)
at org.apache.hadoop.hdfs.tools.CacheAdmin.run(CacheAdmin.java:82)
at org.apache.hadoop.hdfs.tools.CacheAdmin.main(CacheAdmin.java:87)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
hdfs://hdfs-c5-nfs.ent.cloudera.com:8020foo/foo
at java.net.URI.checkPath(URI.java:1788)
at java.net.URI.<init>(URI.java:734)
at org.apache.hadoop.fs.Path.makeQualified(Path.java:467)
... 4 more
[schu@hdfs-c5-nfs ~]$ 
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13817529#comment-13817529
 ] 

Arpit Agarwal commented on HDFS-5481:
-

Thanks for the patch Junping.

{{FsDatasetSpi#getBlockReport}} is going to be removed (HDFS-5429). We now use 
{{FsDatasetSpi#getBlockReports}} to generate per-volume block reports; see the 
example usage in {{TestBlockReport#getBlockReports}}.
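
For reference, a sketch of the per-volume pattern (placeholder types 
throughout; only the iteration shape is the point):

{code}
import java.util.HashMap;
import java.util.Map;

public class PerVolumeReportSketch {
  // Placeholder for the map-returning FsDatasetSpi#getBlockReports.
  static Map<String, long[]> getBlockReports(String bpid) {
    Map<String, long[]> m = new HashMap<>();
    m.put("DS-volume-1", new long[0]);
    m.put("DS-volume-2", new long[0]);
    return m;
  }

  public static void main(String[] args) {
    // One report per storage/volume, sent under that storage's ID.
    for (Map.Entry<String, long[]> e : getBlockReports("bp-1").entrySet()) {
      System.out.println(e.getKey() + ": " + e.getValue().length + " blocks");
    }
  }
}
{code}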

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HDFS-5481.patch
>
>
> In the test case, it still uses the datanodeID to generate the storage 
> report. Replacing it with the storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5481:
-

Attachment: (was: HDFS-5481.patch)

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HDFS-5481.patch
>
>
> In the test case, it still uses the datanodeID to generate the storage 
> report. Replacing it with the storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5481) Fix TestDataNodeVolumeFailure

2013-11-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5481:
-

Attachment: HDFS-5481.patch

> Fix TestDataNodeVolumeFailure
> -
>
> Key: HDFS-5481
> URL: https://issues.apache.org/jira/browse/HDFS-5481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Heterogeneous Storage (HDFS-2832)
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HDFS-5481.patch
>
>
> In the test case, it still uses the datanodeID to generate the storage 
> report. Replacing it with the storageID should work well.



--
This message was sent by Atlassian JIRA
(v6.1#6144)

